Nov 5 16:03:06.544230 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 16:03:06.544319 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 16:03:06.544342 kernel: BIOS-provided physical RAM map:
Nov 5 16:03:06.544351 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 5 16:03:06.544359 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 5 16:03:06.544368 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 5 16:03:06.544379 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 5 16:03:06.544388 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 5 16:03:06.544397 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 5 16:03:06.544406 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 5 16:03:06.544422 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Nov 5 16:03:06.544429 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 5 16:03:06.544438 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 5 16:03:06.544447 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 5 16:03:06.544458 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 5 16:03:06.544472 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 5 16:03:06.544482 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 5 16:03:06.544492 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 5 16:03:06.544502 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 5 16:03:06.544511 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 5 16:03:06.544519 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 5 16:03:06.544526 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 5 16:03:06.544536 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 16:03:06.544545 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 16:03:06.544553 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 5 16:03:06.544567 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 16:03:06.544574 kernel: NX (Execute Disable) protection: active
Nov 5 16:03:06.544584 kernel: APIC: Static calls initialized
Nov 5 16:03:06.544591 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable
Nov 5 16:03:06.544599 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable
Nov 5 16:03:06.544606 kernel: extended physical RAM map:
Nov 5 16:03:06.544616 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 5 16:03:06.544626 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 5 16:03:06.544635 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 5 16:03:06.544643 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 5 16:03:06.544651 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 5 16:03:06.544666 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 5 16:03:06.544673 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 5 16:03:06.544681 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable
Nov 5 16:03:06.544688 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable
Nov 5 16:03:06.544704 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable
Nov 5 16:03:06.544718 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable
Nov 5 16:03:06.544728 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable
Nov 5 16:03:06.544736 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 5 16:03:06.544743 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 5 16:03:06.544754 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 5 16:03:06.544764 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 5 16:03:06.544771 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 5 16:03:06.544779 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 5 16:03:06.544794 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 5 16:03:06.544802 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 5 16:03:06.544810 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 5 16:03:06.544818 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 5 16:03:06.544848 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 5 16:03:06.544859 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 16:03:06.544867 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 16:03:06.544875 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 5 16:03:06.544885 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 16:03:06.544896 kernel: efi: EFI v2.7 by EDK II
Nov 5 16:03:06.544904 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Nov 5 16:03:06.544921 kernel: random: crng init done
Nov 5 16:03:06.544931 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Nov 5 16:03:06.544939 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Nov 5 16:03:06.544949 kernel: secureboot: Secure boot disabled
Nov 5 16:03:06.544956 kernel: SMBIOS 2.8 present.
Nov 5 16:03:06.544964 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 5 16:03:06.544972 kernel: DMI: Memory slots populated: 1/1
Nov 5 16:03:06.544982 kernel: Hypervisor detected: KVM
Nov 5 16:03:06.544989 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 5 16:03:06.544999 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 16:03:06.545009 kernel: kvm-clock: using sched offset of 6101978886 cycles
Nov 5 16:03:06.545026 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 16:03:06.545034 kernel: tsc: Detected 2794.748 MHz processor
Nov 5 16:03:06.545043 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 16:03:06.545052 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 16:03:06.545062 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 5 16:03:06.545070 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 5 16:03:06.545081 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 16:03:06.545097 kernel: Using GB pages for direct mapping
Nov 5 16:03:06.545105 kernel: ACPI: Early table checksum verification disabled
Nov 5 16:03:06.545114 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 5 16:03:06.545122 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 5 16:03:06.545131 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 16:03:06.545139 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 16:03:06.545148 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 5 16:03:06.545156 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 16:03:06.545173 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 16:03:06.545181 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 16:03:06.545190 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 16:03:06.545198 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 5 16:03:06.545207 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 5 16:03:06.545234 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 5 16:03:06.545243 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 5 16:03:06.545260 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 5 16:03:06.545268 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 5 16:03:06.545277 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 5 16:03:06.545285 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 5 16:03:06.545293 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 5 16:03:06.545301 kernel: No NUMA configuration found
Nov 5 16:03:06.545310 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Nov 5 16:03:06.545326 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Nov 5 16:03:06.545335 kernel: Zone ranges:
Nov 5 16:03:06.545343 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 16:03:06.545351 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Nov 5 16:03:06.545359 kernel: Normal empty
Nov 5 16:03:06.545368 kernel: Device empty
Nov 5 16:03:06.545376 kernel: Movable zone start for each node
Nov 5 16:03:06.545392 kernel: Early memory node ranges
Nov 5 16:03:06.545401 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 5 16:03:06.545411 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 5 16:03:06.545420 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 5 16:03:06.545428 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Nov 5 16:03:06.545436 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Nov 5 16:03:06.545444 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Nov 5 16:03:06.545453 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Nov 5 16:03:06.545468 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Nov 5 16:03:06.545478 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Nov 5 16:03:06.545487 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 16:03:06.545516 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 5 16:03:06.545532 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 5 16:03:06.545540 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 16:03:06.545549 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Nov 5 16:03:06.545558 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Nov 5 16:03:06.545566 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 5 16:03:06.545582 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 5 16:03:06.545591 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Nov 5 16:03:06.545600 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 16:03:06.545608 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 16:03:06.545624 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 16:03:06.545633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 16:03:06.545641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 16:03:06.545650 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 16:03:06.545659 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 16:03:06.545667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 16:03:06.545676 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 16:03:06.545692 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 16:03:06.545701 kernel: TSC deadline timer available
Nov 5 16:03:06.545709 kernel: CPU topo: Max. logical packages: 1
Nov 5 16:03:06.545718 kernel: CPU topo: Max. logical dies: 1
Nov 5 16:03:06.545726 kernel: CPU topo: Max. dies per package: 1
Nov 5 16:03:06.545735 kernel: CPU topo: Max. threads per core: 1
Nov 5 16:03:06.545743 kernel: CPU topo: Num. cores per package: 4
Nov 5 16:03:06.545759 kernel: CPU topo: Num. threads per package: 4
Nov 5 16:03:06.545768 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 5 16:03:06.545777 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 16:03:06.545785 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 16:03:06.545794 kernel: kvm-guest: setup PV sched yield
Nov 5 16:03:06.545802 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 5 16:03:06.545811 kernel: Booting paravirtualized kernel on KVM
Nov 5 16:03:06.545820 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 16:03:06.545857 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 5 16:03:06.545866 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 5 16:03:06.545874 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 5 16:03:06.545883 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 5 16:03:06.545892 kernel: kvm-guest: PV spinlocks enabled
Nov 5 16:03:06.545900 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 16:03:06.545913 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 16:03:06.545930 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 16:03:06.545939 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 16:03:06.545948 kernel: Fallback order for Node 0: 0
Nov 5 16:03:06.545956 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Nov 5 16:03:06.545965 kernel: Policy zone: DMA32
Nov 5 16:03:06.545974 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 16:03:06.545994 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 16:03:06.546003 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 16:03:06.546012 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 16:03:06.546020 kernel: Dynamic Preempt: voluntary
Nov 5 16:03:06.546029 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 16:03:06.546041 kernel: rcu: RCU event tracing is enabled.
Nov 5 16:03:06.546050 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 16:03:06.546066 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 16:03:06.546075 kernel: Rude variant of Tasks RCU enabled.
Nov 5 16:03:06.546083 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 16:03:06.546092 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 16:03:06.546100 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 16:03:06.546112 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 16:03:06.546120 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 16:03:06.546129 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 16:03:06.546146 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 5 16:03:06.546154 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 16:03:06.546163 kernel: Console: colour dummy device 80x25
Nov 5 16:03:06.546171 kernel: printk: legacy console [ttyS0] enabled
Nov 5 16:03:06.546180 kernel: ACPI: Core revision 20240827
Nov 5 16:03:06.546188 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 16:03:06.546197 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 16:03:06.546213 kernel: x2apic enabled
Nov 5 16:03:06.546221 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 16:03:06.546230 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 16:03:06.546239 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 16:03:06.546247 kernel: kvm-guest: setup PV IPIs
Nov 5 16:03:06.546256 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 16:03:06.546265 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 16:03:06.546281 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 5 16:03:06.546292 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 16:03:06.546304 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 16:03:06.546314 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 16:03:06.546325 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 16:03:06.546336 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 16:03:06.546348 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 16:03:06.546371 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 5 16:03:06.546382 kernel: active return thunk: retbleed_return_thunk
Nov 5 16:03:06.546394 kernel: RETBleed: Mitigation: untrained return thunk
Nov 5 16:03:06.546405 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 16:03:06.546414 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 16:03:06.546423 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 16:03:06.546433 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 16:03:06.546450 kernel: active return thunk: srso_return_thunk
Nov 5 16:03:06.546459 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 16:03:06.546467 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 16:03:06.546476 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 16:03:06.546485 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 16:03:06.546493 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 16:03:06.546502 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 5 16:03:06.546518 kernel: Freeing SMP alternatives memory: 32K
Nov 5 16:03:06.546527 kernel: pid_max: default: 32768 minimum: 301
Nov 5 16:03:06.546535 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 16:03:06.546544 kernel: landlock: Up and running.
Nov 5 16:03:06.546552 kernel: SELinux: Initializing.
Nov 5 16:03:06.546561 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 16:03:06.546570 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 16:03:06.546586 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 5 16:03:06.546594 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 16:03:06.546603 kernel: ... version:                0
Nov 5 16:03:06.546611 kernel: ... bit width:              48
Nov 5 16:03:06.546620 kernel: ... generic registers:      6
Nov 5 16:03:06.546628 kernel: ... value mask:             0000ffffffffffff
Nov 5 16:03:06.546636 kernel: ... max period:             00007fffffffffff
Nov 5 16:03:06.546652 kernel: ... fixed-purpose events:   0
Nov 5 16:03:06.546661 kernel: ... event mask:             000000000000003f
Nov 5 16:03:06.546669 kernel: signal: max sigframe size: 1776
Nov 5 16:03:06.546677 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 16:03:06.546686 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 16:03:06.546697 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 16:03:06.546706 kernel: smp: Bringing up secondary CPUs ...
Nov 5 16:03:06.546768 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 16:03:06.546777 kernel: .... node #0, CPUs: #1 #2 #3
Nov 5 16:03:06.546785 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 16:03:06.546794 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 5 16:03:06.546803 kernel: Memory: 2445196K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114668K reserved, 0K cma-reserved)
Nov 5 16:03:06.546811 kernel: devtmpfs: initialized
Nov 5 16:03:06.546820 kernel: x86/mm: Memory block size: 128MB
Nov 5 16:03:06.546865 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 5 16:03:06.546875 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 5 16:03:06.546884 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Nov 5 16:03:06.546893 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 5 16:03:06.546902 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Nov 5 16:03:06.546911 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 5 16:03:06.546921 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 16:03:06.546938 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 16:03:06.546947 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 16:03:06.546956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 16:03:06.546965 kernel: audit: initializing netlink subsys (disabled)
Nov 5 16:03:06.546975 kernel: audit: type=2000 audit(1762358583.622:1): state=initialized audit_enabled=0 res=1
Nov 5 16:03:06.546984 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 16:03:06.546993 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 16:03:06.547008 kernel: cpuidle: using governor menu
Nov 5 16:03:06.547017 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 16:03:06.547025 kernel: dca service started, version 1.12.1
Nov 5 16:03:06.547034 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 5 16:03:06.547043 kernel: PCI: Using configuration type 1 for base access
Nov 5 16:03:06.547051 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 16:03:06.547060 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 16:03:06.547076 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 16:03:06.547084 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 16:03:06.547093 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 16:03:06.547102 kernel: ACPI: Added _OSI(Module Device)
Nov 5 16:03:06.547110 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 16:03:06.547119 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 16:03:06.547127 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 16:03:06.547144 kernel: ACPI: Interpreter enabled
Nov 5 16:03:06.547153 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 5 16:03:06.547161 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 16:03:06.547170 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 16:03:06.547178 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 16:03:06.547187 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 16:03:06.547195 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 16:03:06.547507 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 16:03:06.547703 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 16:03:06.547907 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 16:03:06.547919 kernel: PCI host bridge to bus 0000:00
Nov 5 16:03:06.548097 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 16:03:06.548258 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 16:03:06.548434 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 16:03:06.548593 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 5 16:03:06.548751 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 5 16:03:06.548957 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 5 16:03:06.549148 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 16:03:06.549364 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 16:03:06.549550 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 16:03:06.549724 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 5 16:03:06.549942 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 5 16:03:06.550116 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 5 16:03:06.550287 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 16:03:06.550487 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 16:03:06.550661 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 5 16:03:06.550859 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 5 16:03:06.551045 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 5 16:03:06.551236 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 16:03:06.551429 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 5 16:03:06.551620 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 5 16:03:06.551793 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 5 16:03:06.552011 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 16:03:06.552186 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 5 16:03:06.552358 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 5 16:03:06.552546 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 5 16:03:06.552719 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 5 16:03:06.552929 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 16:03:06.553104 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 16:03:06.553285 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 16:03:06.553474 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 5 16:03:06.553647 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 5 16:03:06.553849 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 16:03:06.554030 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 5 16:03:06.554042 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 16:03:06.554051 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 16:03:06.554060 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 16:03:06.554172 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 16:03:06.554180 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 16:03:06.554189 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 16:03:06.554197 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 16:03:06.554206 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 16:03:06.554215 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 16:03:06.554223 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 16:03:06.554239 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 16:03:06.554247 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 16:03:06.554256 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 16:03:06.554265 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 16:03:06.554273 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 16:03:06.554281 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 16:03:06.554290 kernel: iommu: Default domain type: Translated
Nov 5 16:03:06.554306 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 16:03:06.554314 kernel: efivars: Registered efivars operations
Nov 5 16:03:06.554323 kernel: PCI: Using ACPI for IRQ routing
Nov 5 16:03:06.554332 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 16:03:06.554340 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 5 16:03:06.554349 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Nov 5 16:03:06.554357 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff]
Nov 5 16:03:06.554373 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff]
Nov 5 16:03:06.554381 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Nov 5 16:03:06.554390 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Nov 5 16:03:06.554398 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Nov 5 16:03:06.554406 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Nov 5 16:03:06.554582 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 16:03:06.554766 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 16:03:06.554962 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 16:03:06.554975 kernel: vgaarb: loaded
Nov 5 16:03:06.554984 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 16:03:06.554993 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 16:03:06.555001 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 16:03:06.555010 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 16:03:06.555018 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 16:03:06.555039 kernel: pnp: PnP ACPI init
Nov 5 16:03:06.555288 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 5 16:03:06.555312 kernel: pnp: PnP ACPI: found 6 devices
Nov 5 16:03:06.555321 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 16:03:06.555330 kernel: NET: Registered PF_INET protocol family
Nov 5 16:03:06.555338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 16:03:06.555354 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 16:03:06.555363 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 16:03:06.555372 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 16:03:06.555381 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 16:03:06.555390 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 16:03:06.555399 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 16:03:06.555408 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 16:03:06.555423 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 16:03:06.555432 kernel: NET: Registered PF_XDP protocol family
Nov 5 16:03:06.555638 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 5 16:03:06.555818 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 5 16:03:06.556010 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 16:03:06.556172 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 16:03:06.556350 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 16:03:06.556511 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 5 16:03:06.556670 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 5 16:03:06.556854 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 5 16:03:06.556867 kernel: PCI: CLS 0 bytes, default 64
Nov 5 16:03:06.556877 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 16:03:06.556898 kernel: Initialise system trusted keyrings
Nov 5 16:03:06.556907 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 16:03:06.556916 kernel: Key type asymmetric registered
Nov 5 16:03:06.556925 kernel: Asymmetric key parser 'x509' registered
Nov 5 16:03:06.556934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 16:03:06.556950 kernel: io scheduler mq-deadline registered
Nov 5 16:03:06.556958 kernel: io scheduler kyber registered
Nov 5 16:03:06.556967 kernel: io scheduler bfq registered
Nov 5 16:03:06.556976 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 16:03:06.556985 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 16:03:06.556995 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 16:03:06.557004 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 5 16:03:06.557013 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 16:03:06.557028 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 16:03:06.557037 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 16:03:06.557046 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 16:03:06.557055 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 16:03:06.557261 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 5 16:03:06.557292 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 16:03:06.557468 kernel: rtc_cmos 00:04: registered as rtc0
Nov 5 16:03:06.557633 kernel: rtc_cmos 00:04: setting system clock to 2025-11-05T16:03:04 UTC (1762358584)
Nov 5 16:03:06.557801 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 5 16:03:06.557813 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 16:03:06.557830 kernel: efifb: probing for efifb
Nov 5 16:03:06.557856 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 5 16:03:06.557866 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 5 16:03:06.557886 kernel: efifb: scrolling: redraw
Nov 5 16:03:06.557895 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 5 16:03:06.557904 kernel: Console: switching to colour frame buffer device 160x50
Nov 5 16:03:06.557913 kernel: fb0: EFI VGA frame buffer device
Nov 5 16:03:06.557922 kernel: pstore: Using crash dump compression: deflate
Nov 5 16:03:06.557931 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 5 16:03:06.557940 kernel: NET: Registered PF_INET6 protocol family
Nov 5 16:03:06.557955 kernel: Segment Routing with IPv6
Nov 5 16:03:06.557964 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 16:03:06.557975 kernel: NET: Registered PF_PACKET protocol family
Nov 5 16:03:06.557985 kernel: Key type dns_resolver registered
Nov 5 16:03:06.557996 kernel: IPI shorthand broadcast: enabled
Nov 5 16:03:06.558005 kernel: sched_clock: Marking stable (1721002319, 356815900)->(2156888846, -79070627)
Nov 5 16:03:06.558014 kernel: registered taskstats version 1
Nov 5 16:03:06.558030 kernel: Loading compiled-in X.509 certificates
Nov 5 16:03:06.558039 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 16:03:06.558048 kernel: Demotion targets for Node 0: null
Nov 5 16:03:06.558057 kernel: Key type .fscrypt registered
Nov 5 16:03:06.558065 kernel: Key type fscrypt-provisioning registered
Nov 5 16:03:06.558074 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 16:03:06.558083 kernel: ima: Allocated hash algorithm: sha1
Nov 5 16:03:06.558092 kernel: ima: No architecture policies found
Nov 5 16:03:06.558107 kernel: clk: Disabling unused clocks
Nov 5 16:03:06.558116 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 16:03:06.558125 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 16:03:06.558134 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 16:03:06.558143 kernel: Run /init as init process
Nov 5 16:03:06.558151 kernel: with arguments:
Nov 5 16:03:06.558160 kernel: /init
Nov 5 16:03:06.558175 kernel: with environment:
Nov 5 16:03:06.558184 kernel: HOME=/
Nov 5 16:03:06.558193 kernel: TERM=linux
Nov 5 16:03:06.558201 kernel: SCSI subsystem initialized
Nov 5 16:03:06.558210 kernel: libata version 3.00 loaded.
Nov 5 16:03:06.558394 kernel: ahci 0000:00:1f.2: version 3.0
Nov 5 16:03:06.558406 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 5 16:03:06.558605 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 5 16:03:06.558801 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 5 16:03:06.559006 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 5 16:03:06.559214 kernel: scsi host0: ahci
Nov 5 16:03:06.559407 kernel: scsi host1: ahci
Nov 5 16:03:06.559758 kernel: scsi host2: ahci
Nov 5 16:03:06.559975 kernel: scsi host3: ahci
Nov 5 16:03:06.560199 kernel: scsi host4: ahci
Nov 5 16:03:06.560401 kernel: scsi host5: ahci
Nov 5 16:03:06.560414 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 5 16:03:06.560424 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 5 16:03:06.560446 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 5 16:03:06.560456 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 5 16:03:06.560465 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Nov 5 16:03:06.560474 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Nov 5 16:03:06.560482 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 5 16:03:06.560491 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 5 16:03:06.560500 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 5 16:03:06.560516 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 5 16:03:06.560525 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 5 16:03:06.560534 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 5 16:03:06.560543 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 16:03:06.560551 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 5 16:03:06.560568 kernel: ata3.00: applying bridge limits
Nov 5 16:03:06.560577 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 16:03:06.560593 kernel: ata3.00: configured for UDMA/100
Nov 5 16:03:06.560810 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 5 16:03:06.561076 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 5 16:03:06.561252 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 5 16:03:06.561264 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 16:03:06.561273 kernel: GPT:16515071 != 27000831
Nov 5 16:03:06.561296 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 16:03:06.561305 kernel: GPT:16515071 != 27000831
Nov 5 16:03:06.561313 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 16:03:06.561322 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 16:03:06.561517 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 5 16:03:06.561529 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 16:03:06.561718 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 5 16:03:06.561741 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 16:03:06.561750 kernel: device-mapper: uevent: version 1.0.3
Nov 5 16:03:06.561760 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 16:03:06.561769 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 16:03:06.561778 kernel: raid6: avx2x4 gen() 22530 MB/s
Nov 5 16:03:06.561787 kernel: raid6: avx2x2 gen() 22828 MB/s
Nov 5 16:03:06.561796 kernel: raid6: avx2x1 gen() 23062 MB/s
Nov 5 16:03:06.561812 kernel: raid6: using algorithm avx2x1 gen() 23062 MB/s
Nov 5 16:03:06.561830 kernel: raid6: .... xor() 9508 MB/s, rmw enabled
Nov 5 16:03:06.561852 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 16:03:06.561862 kernel: xor: automatically using best checksumming function avx
Nov 5 16:03:06.561872 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 16:03:06.561881 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182)
Nov 5 16:03:06.561890 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01
Nov 5 16:03:06.561919 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 16:03:06.561930 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 16:03:06.561941 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 16:03:06.561952 kernel: loop: module loaded
Nov 5 16:03:06.561963 kernel: loop0: detected capacity change from 0 to 100120
Nov 5 16:03:06.561974 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 16:03:06.561990 systemd[1]: Successfully made /usr/ read-only.
Nov 5 16:03:06.562012 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 16:03:06.562022 systemd[1]: Detected virtualization kvm.
Nov 5 16:03:06.562031 systemd[1]: Detected architecture x86-64.
Nov 5 16:03:06.562040 systemd[1]: Running in initrd.
Nov 5 16:03:06.562049 systemd[1]: No hostname configured, using default hostname.
Nov 5 16:03:06.562066 systemd[1]: Hostname set to .
Nov 5 16:03:06.562077 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 16:03:06.562097 systemd[1]: Queued start job for default target initrd.target.
Nov 5 16:03:06.562112 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 16:03:06.562125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 16:03:06.562137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 16:03:06.562150 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 16:03:06.562173 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 16:03:06.562184 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 16:03:06.562201 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 16:03:06.562211 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 16:03:06.562221 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 16:03:06.562231 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 16:03:06.562248 systemd[1]: Reached target paths.target - Path Units.
Nov 5 16:03:06.562258 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 16:03:06.562268 systemd[1]: Reached target swap.target - Swaps.
Nov 5 16:03:06.562278 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 16:03:06.562288 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 16:03:06.562297 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 16:03:06.562307 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 16:03:06.562324 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 16:03:06.562334 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 16:03:06.562344 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 16:03:06.562354 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 16:03:06.562363 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 16:03:06.562373 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 16:03:06.562397 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 16:03:06.562407 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 16:03:06.562416 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 16:03:06.562427 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 16:03:06.562437 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 16:03:06.562447 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 16:03:06.562457 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 16:03:06.562474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:03:06.562484 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 16:03:06.562494 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 16:03:06.562504 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 16:03:06.562520 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 16:03:06.562568 systemd-journald[315]: Collecting audit messages is disabled.
Nov 5 16:03:06.562591 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 16:03:06.562619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:03:06.562634 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 16:03:06.562648 systemd-journald[315]: Journal started
Nov 5 16:03:06.562674 systemd-journald[315]: Runtime Journal (/run/log/journal/a9039056f1f747e8963cc314c9580717) is 6M, max 48.1M, 42.1M free.
Nov 5 16:03:06.567868 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 16:03:06.572167 systemd-modules-load[318]: Inserted module 'br_netfilter'
Nov 5 16:03:06.573315 kernel: Bridge firewalling registered
Nov 5 16:03:06.580124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 16:03:06.580155 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 16:03:06.587102 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 16:03:06.596343 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 16:03:06.598320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 16:03:06.600616 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 16:03:06.610600 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 16:03:06.614762 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 16:03:06.628274 systemd-tmpfiles[349]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 16:03:06.633726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 16:03:06.638666 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 16:03:06.644925 dracut-cmdline[353]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 16:03:06.645944 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 16:03:06.714629 systemd-resolved[370]: Positive Trust Anchors:
Nov 5 16:03:06.714646 systemd-resolved[370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 16:03:06.714651 systemd-resolved[370]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 16:03:06.714682 systemd-resolved[370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 16:03:06.753286 systemd-resolved[370]: Defaulting to hostname 'linux'.
Nov 5 16:03:06.755234 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 16:03:06.756796 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 16:03:06.808890 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 16:03:06.823880 kernel: iscsi: registered transport (tcp)
Nov 5 16:03:06.849874 kernel: iscsi: registered transport (qla4xxx)
Nov 5 16:03:06.849943 kernel: QLogic iSCSI HBA Driver
Nov 5 16:03:06.882112 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 16:03:06.912868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 16:03:06.915675 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 16:03:06.993005 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 16:03:06.997327 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 16:03:06.999887 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 16:03:07.076743 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 16:03:07.079483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 16:03:07.114156 systemd-udevd[593]: Using default interface naming scheme 'v257'.
Nov 5 16:03:07.132167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 16:03:07.136474 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 16:03:07.168247 dracut-pre-trigger[648]: rd.md=0: removing MD RAID activation
Nov 5 16:03:07.203889 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 16:03:07.209075 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 16:03:07.216626 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 16:03:07.222674 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 16:03:07.293567 systemd-networkd[730]: lo: Link UP
Nov 5 16:03:07.293577 systemd-networkd[730]: lo: Gained carrier
Nov 5 16:03:07.294343 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 16:03:07.295879 systemd[1]: Reached target network.target - Network.
Nov 5 16:03:07.354472 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 16:03:07.360023 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 16:03:07.420230 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 16:03:07.440669 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 16:03:07.455042 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 16:03:07.473032 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 16:03:07.480888 kernel: cryptd: max_cpu_qlen set to 1000
Nov 5 16:03:07.484062 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 16:03:07.488510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 16:03:07.490478 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:03:07.495068 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:03:07.503326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:03:07.519593 systemd-networkd[730]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 16:03:07.520647 systemd-networkd[730]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 16:03:07.522354 systemd-networkd[730]: eth0: Link UP
Nov 5 16:03:07.522615 systemd-networkd[730]: eth0: Gained carrier
Nov 5 16:03:07.522625 systemd-networkd[730]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 16:03:07.526817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 16:03:07.526987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:03:07.538226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:03:07.544935 systemd-networkd[730]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 16:03:07.547688 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 5 16:03:07.550865 kernel: AES CTR mode by8 optimization enabled
Nov 5 16:03:07.577638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:03:07.641460 disk-uuid[780]: Primary Header is updated.
Nov 5 16:03:07.641460 disk-uuid[780]: Secondary Entries is updated.
Nov 5 16:03:07.641460 disk-uuid[780]: Secondary Header is updated.
Nov 5 16:03:07.647747 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 16:03:07.653372 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 16:03:07.657019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 16:03:07.670125 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 16:03:07.686059 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 16:03:07.726765 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 16:03:08.719690 disk-uuid[848]: Warning: The kernel is still using the old partition table.
Nov 5 16:03:08.719690 disk-uuid[848]: The new table will be used at the next reboot or after you
Nov 5 16:03:08.719690 disk-uuid[848]: run partprobe(8) or kpartx(8)
Nov 5 16:03:08.719690 disk-uuid[848]: The operation has completed successfully.
Nov 5 16:03:08.734999 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 16:03:08.735227 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 16:03:08.738248 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 16:03:08.784821 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868)
Nov 5 16:03:08.784912 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 16:03:08.784924 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 16:03:08.790677 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 16:03:08.790796 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 16:03:08.819892 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 16:03:08.821295 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 16:03:08.826366 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 16:03:08.965467 ignition[887]: Ignition 2.22.0 Nov 5 16:03:08.965482 ignition[887]: Stage: fetch-offline Nov 5 16:03:08.965534 ignition[887]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:03:08.965547 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 16:03:08.965643 ignition[887]: parsed url from cmdline: "" Nov 5 16:03:08.965647 ignition[887]: no config URL provided Nov 5 16:03:08.965652 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 16:03:08.965664 ignition[887]: no config at "/usr/lib/ignition/user.ign" Nov 5 16:03:08.965714 ignition[887]: op(1): [started] loading QEMU firmware config module Nov 5 16:03:08.965720 ignition[887]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 5 16:03:08.985177 ignition[887]: op(1): [finished] loading QEMU firmware config module Nov 5 16:03:09.068088 ignition[887]: parsing config with SHA512: 082c04b7b77659eacea1005b3a0bf23bed881b511feccdcde17156cd49f89d483e9f023c35e8c56b741f6293571d8ac46b1d0e806196c10d15747490269da335 Nov 5 16:03:09.074215 unknown[887]: fetched base config from "system" Nov 5 16:03:09.075643 unknown[887]: fetched user config from "qemu" Nov 5 16:03:09.076104 ignition[887]: fetch-offline: fetch-offline passed Nov 5 16:03:09.076189 ignition[887]: Ignition finished successfully Nov 5 16:03:09.079674 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 16:03:09.081404 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 5 16:03:09.082525 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 16:03:09.122280 ignition[898]: Ignition 2.22.0 Nov 5 16:03:09.122300 ignition[898]: Stage: kargs Nov 5 16:03:09.122464 ignition[898]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:03:09.122476 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 16:03:09.123278 ignition[898]: kargs: kargs passed Nov 5 16:03:09.123332 ignition[898]: Ignition finished successfully Nov 5 16:03:09.129456 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 16:03:09.131563 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 16:03:09.171992 ignition[907]: Ignition 2.22.0 Nov 5 16:03:09.172009 ignition[907]: Stage: disks Nov 5 16:03:09.172204 ignition[907]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:03:09.172219 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 16:03:09.173169 ignition[907]: disks: disks passed Nov 5 16:03:09.178121 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 16:03:09.173230 ignition[907]: Ignition finished successfully Nov 5 16:03:09.179898 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 16:03:09.183252 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 16:03:09.183860 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 16:03:09.191622 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 16:03:09.192712 systemd[1]: Reached target basic.target - Basic System. Nov 5 16:03:09.195006 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 16:03:09.239699 systemd-fsck[917]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 5 16:03:09.248940 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
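
All three Ignition stages above (fetch-offline, kargs, disks) ran against a config delivered over QEMU's fw_cfg interface, which is why op(1) modprobes qemu_fw_cfg before the SHA512-verified parse. On QEMU the config is injected at the documented key opt/com.coreos/config; a minimal sketch (the file name and spec version are assumptions, since the actual config is not shown in the log):

    qemu-system-x86_64 ... -fw_cfg name=opt/com.coreos/config,file=config.ign

where config.ign can be as small as:

    {"ignition": {"version": "3.4.0"}}
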
Nov 5 16:03:09.251146 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 16:03:09.290064 systemd-networkd[730]: eth0: Gained IPv6LL Nov 5 16:03:09.479910 kernel: EXT4-fs (vda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 16:03:09.480981 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 16:03:09.482243 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 16:03:09.485980 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 16:03:09.490998 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 16:03:09.492926 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 16:03:09.492992 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 16:03:09.493040 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 16:03:09.516408 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 16:03:09.524780 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (925) Nov 5 16:03:09.524813 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:03:09.524831 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 16:03:09.524733 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 16:03:09.533098 kernel: BTRFS info (device vda6): turning on async discard Nov 5 16:03:09.533133 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 16:03:09.538186 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 16:03:09.617341 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 16:03:09.625403 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Nov 5 16:03:09.630175 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 16:03:09.643582 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 16:03:09.819553 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 16:03:09.823725 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 16:03:09.826280 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 16:03:09.851518 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 16:03:09.854867 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:03:09.879031 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 16:03:09.903133 ignition[1039]: INFO : Ignition 2.22.0 Nov 5 16:03:09.903133 ignition[1039]: INFO : Stage: mount Nov 5 16:03:09.905891 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 16:03:09.905891 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 16:03:09.905891 ignition[1039]: INFO : mount: mount passed Nov 5 16:03:09.905891 ignition[1039]: INFO : Ignition finished successfully Nov 5 16:03:09.909353 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 16:03:09.918341 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 16:03:09.942464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
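
The BTRFS lines for /dev/vda6 ("turning on async discard", "enabling free space tree") correspond to the discard=async and space_cache=v2 mount options. The equivalent manual mount, assuming the same OEM partition and mount point:

    mount -t btrfs -o discard=async,space_cache=v2 /dev/vda6 /sysroot/oem
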
Nov 5 16:03:10.004907 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1051) Nov 5 16:03:10.004991 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:03:10.008217 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 16:03:10.014302 kernel: BTRFS info (device vda6): turning on async discard Nov 5 16:03:10.014341 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 16:03:10.016682 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 16:03:10.092221 ignition[1068]: INFO : Ignition 2.22.0 Nov 5 16:03:10.092221 ignition[1068]: INFO : Stage: files Nov 5 16:03:10.096778 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 16:03:10.096778 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 16:03:10.096778 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Nov 5 16:03:10.096778 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 16:03:10.096778 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 16:03:10.108682 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 16:03:10.108682 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 16:03:10.108682 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 16:03:10.108682 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 16:03:10.108682 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 5 16:03:10.098407 unknown[1068]: wrote ssh authorized keys file for user: core Nov 5 16:03:10.174212 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 16:03:10.327332 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 16:03:10.327332 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 16:03:10.341963 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 16:03:10.341963 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 16:03:10.341963 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 16:03:10.341963 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 16:03:10.341963 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 16:03:10.341963 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 16:03:10.341963 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 16:03:10.465351 ignition[1068]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 16:03:10.468675 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 16:03:10.468675 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 16:03:10.561780 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 16:03:10.561780 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 16:03:10.569923 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 5 16:03:11.461814 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 16:03:12.382312 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 16:03:12.382312 ignition[1068]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 16:03:12.390209 ignition[1068]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 16:03:12.390209 ignition[1068]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 16:03:12.390209 ignition[1068]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 16:03:12.390209 ignition[1068]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 5 16:03:12.390209 ignition[1068]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 5 16:03:12.390209 ignition[1068]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 5 16:03:12.390209 ignition[1068]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 5 16:03:12.390209 ignition[1068]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 5 16:03:12.480614 ignition[1068]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 16:03:12.485898 ignition[1068]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 16:03:12.488887 ignition[1068]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 5 16:03:12.488887 ignition[1068]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 5 16:03:12.488887 ignition[1068]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 16:03:12.488887 ignition[1068]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 16:03:12.488887 ignition[1068]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 
16:03:12.488887 ignition[1068]: INFO : files: files passed Nov 5 16:03:12.488887 ignition[1068]: INFO : Ignition finished successfully Nov 5 16:03:12.492889 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 16:03:12.498967 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 16:03:12.504441 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 16:03:12.526644 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 16:03:12.526835 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 16:03:12.532344 initrd-setup-root-after-ignition[1099]: grep: /sysroot/oem/oem-release: No such file or directory Nov 5 16:03:12.538248 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 16:03:12.541386 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 16:03:12.541386 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 16:03:12.549180 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 16:03:12.550611 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 16:03:12.558097 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 16:03:12.627584 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 16:03:12.627746 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 16:03:12.630733 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 16:03:12.634409 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 16:03:12.638399 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 16:03:12.640974 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 16:03:12.697505 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 16:03:12.700292 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 16:03:12.727237 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 16:03:12.727442 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 16:03:12.731334 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 16:03:12.735030 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 16:03:12.735930 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 16:03:12.736095 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 16:03:12.743670 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 16:03:12.745334 systemd[1]: Stopped target basic.target - Basic System. Nov 5 16:03:12.752790 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 16:03:12.753640 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 16:03:12.757804 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 16:03:12.762399 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 16:03:12.769505 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
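
The files stage closed by writing /sysroot/etc/.ignition-result.json, the completion record that ignition-quench then confirmed. From the booted system it can be read back directly (its exact fields vary by Ignition release):

    cat /etc/.ignition-result.json
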
Nov 5 16:03:12.770311 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 16:03:12.773929 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 16:03:12.778471 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 16:03:12.782280 systemd[1]: Stopped target swap.target - Swaps. Nov 5 16:03:12.785396 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 16:03:12.785551 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 16:03:12.790468 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 16:03:12.794512 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 16:03:12.795476 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 16:03:12.795665 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 16:03:12.800369 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 16:03:12.800546 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 16:03:12.809541 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 16:03:12.809953 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 16:03:12.810894 systemd[1]: Stopped target paths.target - Path Units. Nov 5 16:03:12.815391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 16:03:12.817968 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 16:03:12.819361 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 16:03:12.824572 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 16:03:12.825572 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 16:03:12.825745 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 16:03:12.831326 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 16:03:12.831428 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 16:03:12.831986 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 16:03:12.832156 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 16:03:12.837529 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 16:03:12.837708 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 16:03:12.843727 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 16:03:12.846076 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 16:03:12.849234 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 16:03:12.849483 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 16:03:12.865453 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 16:03:12.865608 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 16:03:12.866710 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 16:03:12.866826 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 16:03:12.880438 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 16:03:12.880602 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 5 16:03:12.896074 ignition[1125]: INFO : Ignition 2.22.0 Nov 5 16:03:12.896074 ignition[1125]: INFO : Stage: umount Nov 5 16:03:12.899482 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 16:03:12.899482 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 16:03:12.899482 ignition[1125]: INFO : umount: umount passed Nov 5 16:03:12.899482 ignition[1125]: INFO : Ignition finished successfully Nov 5 16:03:12.901384 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 16:03:12.901555 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 16:03:12.906478 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 16:03:12.907186 systemd[1]: Stopped target network.target - Network. Nov 5 16:03:12.908293 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 16:03:12.908385 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 16:03:12.912135 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 16:03:12.912213 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 16:03:12.992337 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 16:03:12.992442 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 16:03:12.996386 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 16:03:12.996440 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 16:03:12.999035 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 16:03:13.002960 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 16:03:13.008127 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 16:03:13.008287 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 16:03:13.010693 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 16:03:13.010753 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 16:03:13.025164 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 16:03:13.025307 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 16:03:13.032958 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 16:03:13.033118 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 16:03:13.039486 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 16:03:13.043297 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 16:03:13.043350 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 16:03:13.048702 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 16:03:13.049549 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 16:03:13.049631 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 16:03:13.053477 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 16:03:13.053551 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:03:13.057460 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 16:03:13.057518 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 16:03:13.061222 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 16:03:13.090892 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 5 16:03:13.091104 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 16:03:13.092918 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 16:03:13.093007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 16:03:13.097812 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 16:03:13.097905 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 16:03:13.098376 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 16:03:13.098489 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 16:03:13.107493 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 16:03:13.107605 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 16:03:13.112494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 16:03:13.112580 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 16:03:13.118758 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 16:03:13.128247 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 16:03:13.128372 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 16:03:13.128895 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 16:03:13.128968 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 16:03:13.138777 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 16:03:13.138967 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 16:03:13.139688 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 16:03:13.139754 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 16:03:13.140538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 16:03:13.140610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:03:13.142626 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 16:03:13.142794 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 16:03:13.161725 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 16:03:13.161924 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 16:03:13.165418 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 16:03:13.169930 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 16:03:13.199249 systemd[1]: Switching root. Nov 5 16:03:13.243787 systemd-journald[315]: Journal stopped Nov 5 16:03:15.215672 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). 
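
"Switching root" is the initrd handing control to the real root file system; initrd-switch-root.service drives it, and journald is torn down by PID 1 in the process, hence the SIGTERM above. The operation it wraps is roughly, as a sketch:

    systemctl switch-root /sysroot
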
Nov 5 16:03:15.215773 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 16:03:15.215822 kernel: SELinux: policy capability open_perms=1 Nov 5 16:03:15.215835 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 16:03:15.215863 kernel: SELinux: policy capability always_check_network=0 Nov 5 16:03:15.215879 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 16:03:15.215900 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 16:03:15.215912 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 16:03:15.215933 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 16:03:15.215945 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 16:03:15.215958 kernel: audit: type=1403 audit(1762358594.222:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 16:03:15.215974 systemd[1]: Successfully loaded SELinux policy in 73.925ms. Nov 5 16:03:15.215997 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.618ms. Nov 5 16:03:15.216011 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 16:03:15.216024 systemd[1]: Detected virtualization kvm. Nov 5 16:03:15.216045 systemd[1]: Detected architecture x86-64. Nov 5 16:03:15.216058 systemd[1]: Detected first boot. Nov 5 16:03:15.216071 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 16:03:15.216091 zram_generator::config[1171]: No configuration found. Nov 5 16:03:15.216113 kernel: Guest personality initialized and is inactive Nov 5 16:03:15.216128 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 5 16:03:15.216142 kernel: Initialized host personality Nov 5 16:03:15.216165 kernel: NET: Registered PF_VSOCK protocol family Nov 5 16:03:15.216179 systemd[1]: Populated /etc with preset unit settings. Nov 5 16:03:15.216191 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 16:03:15.216206 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 16:03:15.216220 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 16:03:15.216236 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 16:03:15.216249 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 16:03:15.216270 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 16:03:15.216283 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 16:03:15.216296 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 16:03:15.216309 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 16:03:15.216322 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 16:03:15.216334 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 16:03:15.216347 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 16:03:15.216370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
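
"Initializing machine ID from SMBIOS/DMI UUID" means this first boot derived /etc/machine-id from the VM's product UUID rather than generating a random one, which systemd does on detected hypervisors such as the KVM guest seen here. The source value it reads is exposed in sysfs:

    cat /sys/class/dmi/id/product_uuid
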
Nov 5 16:03:15.216384 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 16:03:15.216404 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 16:03:15.216423 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 16:03:15.216441 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 16:03:15.216458 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 16:03:15.216485 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 16:03:15.216499 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 16:03:15.216511 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 16:03:15.216528 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 16:03:15.216541 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 16:03:15.216555 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 16:03:15.216573 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 16:03:15.216605 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 16:03:15.216618 systemd[1]: Reached target slices.target - Slice Units. Nov 5 16:03:15.216637 systemd[1]: Reached target swap.target - Swaps. Nov 5 16:03:15.216652 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 16:03:15.216665 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 16:03:15.216678 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 16:03:15.216691 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 16:03:15.216715 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 16:03:15.216731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 16:03:15.216744 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 16:03:15.216757 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 16:03:15.216770 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 16:03:15.216782 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 16:03:15.216798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:03:15.216821 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 16:03:15.216835 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 16:03:15.216884 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 16:03:15.216898 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 16:03:15.216911 systemd[1]: Reached target machines.target - Containers. Nov 5 16:03:15.216927 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 16:03:15.216941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:03:15.216963 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 5 16:03:15.216976 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 16:03:15.216989 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 16:03:15.217001 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 16:03:15.217018 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 16:03:15.217031 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 16:03:15.217044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 16:03:15.217066 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 16:03:15.217086 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 16:03:15.217102 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 16:03:15.217116 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 16:03:15.217128 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 16:03:15.217142 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 16:03:15.217163 kernel: fuse: init (API version 7.41) Nov 5 16:03:15.217181 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 16:03:15.217194 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 16:03:15.217207 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 16:03:15.217219 kernel: ACPI: bus type drm_connector registered Nov 5 16:03:15.217240 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 16:03:15.217253 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 16:03:15.217268 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 16:03:15.217283 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:03:15.217296 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 16:03:15.217308 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 16:03:15.217377 systemd-journald[1253]: Collecting audit messages is disabled. Nov 5 16:03:15.217403 systemd-journald[1253]: Journal started Nov 5 16:03:15.217426 systemd-journald[1253]: Runtime Journal (/run/log/journal/a9039056f1f747e8963cc314c9580717) is 6M, max 48.1M, 42.1M free. Nov 5 16:03:14.846724 systemd[1]: Queued start job for default target multi-user.target. Nov 5 16:03:14.868886 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 16:03:14.869698 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 16:03:15.222146 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 16:03:15.224121 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 16:03:15.226066 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 16:03:15.229175 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 16:03:15.231257 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
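
systemd-journald is now keeping a 6M runtime journal under /run/log/journal; until it is flushed to persistent storage, entries from this boot can still be queried the usual way, e.g. per unit:

    journalctl -b -u systemd-networkd.service
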
Nov 5 16:03:15.233332 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 16:03:15.235732 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 16:03:15.238227 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 16:03:15.238458 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 16:03:15.240811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 16:03:15.241069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 16:03:15.243387 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 16:03:15.243630 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 16:03:15.245965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 16:03:15.246223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 16:03:15.248674 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 16:03:15.248966 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 16:03:15.251428 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 16:03:15.251720 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 16:03:15.254185 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 16:03:15.256756 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 16:03:15.260255 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 16:03:15.263001 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 16:03:15.282274 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 16:03:15.285183 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 16:03:15.289536 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 16:03:15.292805 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 16:03:15.295021 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 16:03:15.295177 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 16:03:15.298542 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 16:03:15.301150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:03:15.312007 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 16:03:15.316003 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 16:03:15.318350 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 16:03:15.320987 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 16:03:15.323230 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 16:03:15.324872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 16:03:15.330116 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
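
The modprobe@configfs, modprobe@dm_mod, modprobe@drm and similar units are instances of the modprobe@.service template, whose only job is to run modprobe on the instance name. Loading one by hand, using a module name from the log:

    systemctl start modprobe@loop.service
    # roughly equivalent to: modprobe loop
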
Nov 5 16:03:15.331652 systemd-journald[1253]: Time spent on flushing to /var/log/journal/a9039056f1f747e8963cc314c9580717 is 18.352ms for 1053 entries. Nov 5 16:03:15.331652 systemd-journald[1253]: System Journal (/var/log/journal/a9039056f1f747e8963cc314c9580717) is 8M, max 163.5M, 155.5M free. Nov 5 16:03:15.808618 systemd-journald[1253]: Received client request to flush runtime journal. Nov 5 16:03:15.808715 kernel: loop1: detected capacity change from 0 to 110984 Nov 5 16:03:15.808746 kernel: loop2: detected capacity change from 0 to 229808 Nov 5 16:03:15.808768 kernel: loop3: detected capacity change from 0 to 128048 Nov 5 16:03:15.808787 kernel: loop4: detected capacity change from 0 to 110984 Nov 5 16:03:15.339629 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 16:03:15.343466 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 16:03:15.345520 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 16:03:15.352161 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 16:03:15.519141 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Nov 5 16:03:15.519155 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Nov 5 16:03:15.521898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:03:15.525024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 16:03:15.529099 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 16:03:15.763392 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 16:03:15.769998 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 16:03:15.775985 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 16:03:15.778578 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 16:03:15.781941 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 16:03:15.789285 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 16:03:15.801907 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Nov 5 16:03:15.801922 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Nov 5 16:03:15.808739 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 16:03:15.813542 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 16:03:15.825879 kernel: loop5: detected capacity change from 0 to 229808 Nov 5 16:03:15.826370 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 16:03:15.848814 kernel: loop6: detected capacity change from 0 to 128048 Nov 5 16:03:15.847908 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 16:03:15.862023 (sd-merge)[1303]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 5 16:03:15.867403 (sd-merge)[1303]: Merged extensions into '/usr'. Nov 5 16:03:15.871865 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 16:03:15.875154 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 16:03:15.875177 systemd[1]: Reloading... Nov 5 16:03:16.007081 zram_generator::config[1352]: No configuration found. 
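
The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr; the kubernetes.raw symlink Ignition wrote under /etc/extensions earlier is what made the third image eligible. The merge state can be inspected or redone from a shell:

    systemd-sysext status
    systemd-sysext refresh
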
Nov 5 16:03:16.027957 systemd-resolved[1304]: Positive Trust Anchors: Nov 5 16:03:16.027981 systemd-resolved[1304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 16:03:16.027988 systemd-resolved[1304]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 16:03:16.028030 systemd-resolved[1304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 16:03:16.034042 systemd-resolved[1304]: Defaulting to hostname 'linux'. Nov 5 16:03:16.218937 systemd[1]: Reloading finished in 343 ms. Nov 5 16:03:16.248455 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 16:03:16.250710 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 16:03:16.252980 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 16:03:16.257933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 16:03:16.284623 systemd[1]: Starting ensure-sysext.service... Nov 5 16:03:16.287334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 16:03:16.317470 systemd[1]: Reload requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)... Nov 5 16:03:16.317647 systemd[1]: Reloading... Nov 5 16:03:16.391461 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 16:03:16.392736 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 16:03:16.393218 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 16:03:16.395064 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 16:03:16.396369 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 16:03:16.397941 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Nov 5 16:03:16.398094 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Nov 5 16:03:16.405902 zram_generator::config[1413]: No configuration found. Nov 5 16:03:16.407200 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 16:03:16.407212 systemd-tmpfiles[1386]: Skipping /boot Nov 5 16:03:16.423016 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 16:03:16.423143 systemd-tmpfiles[1386]: Skipping /boot Nov 5 16:03:16.645518 systemd[1]: Reloading finished in 327 ms. Nov 5 16:03:16.670831 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 16:03:16.706473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 16:03:16.718643 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 16:03:16.722325 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
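
The two ". IN DS" records are the DNSSEC trust anchors for the root zone (key tags 20326 and 38696) built into systemd-resolved, while the negative trust anchors exempt locally served and private-use names from validation. The resolver's effective configuration can be checked once the system is up with:

    resolvectl status
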
Nov 5 16:03:16.750950 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 16:03:16.754622 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 16:03:16.760117 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 16:03:16.763938 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 16:03:16.770246 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:03:16.770415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:03:16.776353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 16:03:16.782294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 16:03:16.787828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 16:03:16.789948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:03:16.790077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 16:03:16.790180 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:03:16.792730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 16:03:16.798992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 16:03:16.810113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 16:03:16.810413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 16:03:16.814767 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 16:03:16.815120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 16:03:16.826306 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 16:03:16.827630 systemd-udevd[1460]: Using default interface naming scheme 'v257'. Nov 5 16:03:16.834047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:03:16.834286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:03:16.836225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 16:03:16.841071 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 16:03:16.853015 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 16:03:16.858132 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 16:03:16.860296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:03:16.860621 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Nov 5 16:03:16.860762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:03:16.863941 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 16:03:16.867716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 16:03:16.868178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 16:03:16.871187 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 16:03:16.871542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 16:03:16.875719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 16:03:16.876093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 16:03:16.879637 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 16:03:16.880405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 16:03:16.882903 augenrules[1491]: No rules Nov 5 16:03:16.885486 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 16:03:16.885791 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 16:03:16.894634 systemd[1]: Finished ensure-sysext.service. Nov 5 16:03:16.901980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 16:03:16.911059 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 16:03:16.913937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 16:03:16.914010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 16:03:16.917016 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 16:03:16.939413 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 16:03:16.942137 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 16:03:17.053136 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 16:03:17.125155 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 16:03:17.127687 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 16:03:17.128251 systemd-networkd[1509]: lo: Link UP Nov 5 16:03:17.128630 systemd-networkd[1509]: lo: Gained carrier Nov 5 16:03:17.130278 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 16:03:17.132788 systemd[1]: Reached target network.target - Network. Nov 5 16:03:17.139519 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 16:03:17.190897 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 16:03:17.204590 systemd-networkd[1509]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 16:03:17.204604 systemd-networkd[1509]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 5 16:03:17.205424 systemd-networkd[1509]: eth0: Link UP Nov 5 16:03:17.205692 systemd-networkd[1509]: eth0: Gained carrier Nov 5 16:03:17.205782 systemd-networkd[1509]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 16:03:17.209746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 16:03:17.246959 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 16:03:17.249424 systemd-networkd[1509]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 16:03:17.252646 systemd-timesyncd[1512]: Network configuration changed, trying to establish connection. Nov 5 16:03:17.258341 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 16:03:17.261873 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 16:03:17.265600 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 16:03:18.086609 systemd-timesyncd[1512]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 5 16:03:18.086694 systemd-timesyncd[1512]: Initial clock synchronization to Wed 2025-11-05 16:03:18.086292 UTC. Nov 5 16:03:18.086766 systemd-resolved[1304]: Clock change detected. Flushing caches. Nov 5 16:03:18.094393 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 5 16:03:18.100429 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 5 16:03:18.101296 kernel: ACPI: button: Power Button [PWRF] Nov 5 16:03:18.101315 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 5 16:03:18.109772 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 16:03:18.258261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:03:18.284859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 16:03:18.285405 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:03:18.290740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:03:18.377887 kernel: kvm_amd: TSC scaling supported Nov 5 16:03:18.378015 kernel: kvm_amd: Nested Virtualization enabled Nov 5 16:03:18.378043 kernel: kvm_amd: Nested Paging enabled Nov 5 16:03:18.378063 kernel: kvm_amd: LBR virtualization supported Nov 5 16:03:18.409692 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 5 16:03:18.409778 kernel: kvm_amd: Virtual GIF supported Nov 5 16:03:18.502386 kernel: EDAC MC: Ver: 3.0.0 Nov 5 16:03:18.589317 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:03:18.628335 ldconfig[1457]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 16:03:18.637801 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 16:03:18.641765 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 16:03:18.672326 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 16:03:18.676827 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 16:03:18.679630 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 16:03:18.682206 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
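
systemd-timesyncd reached the DHCP-provided NTP server at 10.0.0.1:123 and stepped the clock, which is why systemd-resolved flushes its caches immediately afterwards ("Clock change detected"). Current synchronization details are available via:

    timedatectl timesync-status
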
Nov 5 16:03:18.685179 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 16:03:18.687961 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 16:03:18.690475 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 16:03:18.693125 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 16:03:18.695550 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 16:03:18.695604 systemd[1]: Reached target paths.target - Path Units. Nov 5 16:03:18.697487 systemd[1]: Reached target timers.target - Timer Units. Nov 5 16:03:18.701508 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 16:03:18.706911 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 16:03:18.738908 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 16:03:18.741641 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 16:03:18.744791 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 16:03:18.758914 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 16:03:18.761449 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 16:03:18.764453 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 16:03:18.767826 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 16:03:18.769706 systemd[1]: Reached target basic.target - Basic System. Nov 5 16:03:18.771630 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 16:03:18.771673 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 16:03:18.773459 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 16:03:18.777032 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 16:03:18.792849 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 16:03:18.796592 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 16:03:18.799797 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 16:03:18.801668 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 16:03:18.809809 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 16:03:18.814905 jq[1579]: false Nov 5 16:03:18.814476 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 16:03:18.818501 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 16:03:18.822278 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 16:03:18.825933 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 16:03:18.828572 extend-filesystems[1580]: Found /dev/vda6 Nov 5 16:03:18.834238 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 16:03:18.836312 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
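Most of the "Listening on ..." lines here are socket activation: systemd holds the listening socket (docker.socket, sshd.socket, systemd-hostnamed.socket) and only starts the matching service on the first connection. A generic sketch, not Flatcar-specific, for inspecting that wiring:

    systemctl list-sockets           # each listening address and the unit it activates
    systemctl cat docker.socket      # the socket unit that pulls in docker.service on demand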
Nov 5 16:03:18.841887 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing passwd entry cache Nov 5 16:03:18.841672 oslogin_cache_refresh[1581]: Refreshing passwd entry cache Nov 5 16:03:18.845082 extend-filesystems[1580]: Found /dev/vda9 Nov 5 16:03:18.848965 extend-filesystems[1580]: Checking size of /dev/vda9 Nov 5 16:03:18.851619 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting users, quitting Nov 5 16:03:18.851619 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 16:03:18.851619 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing group entry cache Nov 5 16:03:18.850972 oslogin_cache_refresh[1581]: Failure getting users, quitting Nov 5 16:03:18.851023 oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 16:03:18.851108 oslogin_cache_refresh[1581]: Refreshing group entry cache Nov 5 16:03:18.857950 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting groups, quitting Nov 5 16:03:18.857950 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 16:03:18.857933 oslogin_cache_refresh[1581]: Failure getting groups, quitting Nov 5 16:03:18.857951 oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 16:03:18.883195 extend-filesystems[1580]: Resized partition /dev/vda9 Nov 5 16:03:18.888753 extend-filesystems[1601]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 16:03:18.891902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 16:03:18.893720 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 16:03:18.898511 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 16:03:18.903091 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 16:03:18.905771 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 16:03:18.906077 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 16:03:18.906550 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 16:03:18.906793 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 16:03:18.909332 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 16:03:18.910395 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 16:03:18.918385 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 5 16:03:18.923509 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 16:03:18.930298 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
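The oslogin_cache_refresh failures above ("Failure getting users, quitting") are presumably benign on this QEMU VM: the refresher needs the Google Cloud metadata service, which is absent here, so it writes empty caches and removes the .bak files. A hedged check that local account resolution is unaffected (the cache path is inferred from the .bak names in the log):

    getent passwd core                   # local NSS lookup, independent of the OS Login cache
    ls /etc/oslogin_passwd.cache 2>/dev/null \
      || echo "no oslogin cache (expected off-GCE)"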
Nov 5 16:03:18.946501 jq[1603]: true Nov 5 16:03:18.966151 (ntainerd)[1624]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 16:03:18.966921 update_engine[1602]: I20251105 16:03:18.966434 1602 main.cc:92] Flatcar Update Engine starting Nov 5 16:03:19.028658 tar[1607]: linux-amd64/LICENSE Nov 5 16:03:19.029248 tar[1607]: linux-amd64/helm Nov 5 16:03:19.068401 jq[1621]: true Nov 5 16:03:19.337882 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 5 16:03:19.362841 dbus-daemon[1577]: [system] SELinux support is enabled Nov 5 16:03:19.428689 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 16:03:19.767336 update_engine[1602]: I20251105 16:03:19.367913 1602 update_check_scheduler.cc:74] Next update check in 10m28s Nov 5 16:03:19.434788 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 16:03:19.438134 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 16:03:19.438158 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 16:03:19.441534 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 16:03:19.441552 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 16:03:19.444371 systemd[1]: Started update-engine.service - Update Engine. Nov 5 16:03:19.449860 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 16:03:19.658572 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 16:03:19.710592 systemd-networkd[1509]: eth0: Gained IPv6LL Nov 5 16:03:19.714808 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 16:03:19.717465 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 16:03:19.721037 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 16:03:19.764533 systemd-logind[1592]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 16:03:19.764556 systemd-logind[1592]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 16:03:19.765162 systemd-logind[1592]: New seat seat0. Nov 5 16:03:19.777484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:03:19.810500 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 16:03:19.812323 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 16:03:19.812323 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 5 16:03:19.812323 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 5 16:03:19.822508 extend-filesystems[1580]: Resized filesystem in /dev/vda9 Nov 5 16:03:19.813337 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 16:03:19.817389 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 16:03:19.824501 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 16:03:19.873699 systemd[1]: coreos-metadata.service: Deactivated successfully. 
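Between these entries, extend-filesystems grew the root filesystem on /dev/vda9 online from 456704 to 1784827 4 KiB blocks (roughly 1.7 GiB to 6.8 GiB), using resize2fs 1.47.3 as logged. The manual equivalent, sketched under the assumption that the partition itself has already been enlarged:

    lsblk /dev/vda       # confirm vda9 already spans the new space
    resize2fs /dev/vda9  # online-grow ext4 to fill the partition (the tool extend-filesystems ran)
    df -h /              # verify the filesystem now reports the larger size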
Nov 5 16:03:19.874106 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 16:03:19.961564 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 16:03:20.012602 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 16:03:20.189887 tar[1607]: linux-amd64/README.md Nov 5 16:03:20.261515 sshd_keygen[1617]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 16:03:20.280878 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 16:03:20.305718 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 16:03:20.311026 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 16:03:20.314047 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:45924.service - OpenSSH per-connection server daemon (10.0.0.1:45924). Nov 5 16:03:20.334711 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 16:03:20.335023 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 16:03:20.339446 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 16:03:20.397749 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 16:03:20.404189 containerd[1624]: time="2025-11-05T16:03:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 16:03:20.404851 containerd[1624]: time="2025-11-05T16:03:20.404814564Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423084517Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.222µs" Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423132276Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423157043Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423393717Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423409256Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423446155Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423516697Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423528569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423832560Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 
16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423847127Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423858638Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 16:03:20.424270 containerd[1624]: time="2025-11-05T16:03:20.423867164Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 16:03:20.424611 containerd[1624]: time="2025-11-05T16:03:20.423976770Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 16:03:20.424611 containerd[1624]: time="2025-11-05T16:03:20.424237970Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 16:03:20.424611 containerd[1624]: time="2025-11-05T16:03:20.424302060Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 16:03:20.424611 containerd[1624]: time="2025-11-05T16:03:20.424313632Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 16:03:20.424611 containerd[1624]: time="2025-11-05T16:03:20.424374907Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 16:03:20.424611 containerd[1624]: time="2025-11-05T16:03:20.424611100Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 16:03:20.424778 containerd[1624]: time="2025-11-05T16:03:20.424680400Z" level=info msg="metadata content store policy set" policy=shared Nov 5 16:03:20.433400 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 16:03:20.505998 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 16:03:20.508006 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 16:03:20.962615 sshd[1690]: Connection closed by authenticating user core 10.0.0.1 port 45924 [preauth] Nov 5 16:03:20.966139 systemd[1]: sshd@0-10.0.0.150:22-10.0.0.1:45924.service: Deactivated successfully. 
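containerd logged several "skip loading plugin" decisions above: the blockfile, btrfs, devmapper and zfs snapshotters are each skipped for the stated reason, leaving overlayfs as the effective snapshotter. The same information is queryable after boot; a small sketch:

    ctr plugins ls                    # TYPE, ID, PLATFORMS, STATUS for every containerd plugin
    ctr plugins ls | grep -i skip     # just the skipped ones, matching the log lines above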
Nov 5 16:03:21.109024 containerd[1624]: time="2025-11-05T16:03:21.108924692Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 16:03:21.109187 containerd[1624]: time="2025-11-05T16:03:21.109087227Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 16:03:21.109187 containerd[1624]: time="2025-11-05T16:03:21.109117664Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 16:03:21.109187 containerd[1624]: time="2025-11-05T16:03:21.109138342Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 16:03:21.109269 containerd[1624]: time="2025-11-05T16:03:21.109224755Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 16:03:21.109269 containerd[1624]: time="2025-11-05T16:03:21.109249170Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 16:03:21.109269 containerd[1624]: time="2025-11-05T16:03:21.109265822Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 16:03:21.109369 containerd[1624]: time="2025-11-05T16:03:21.109281000Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 16:03:21.109641 bash[1644]: Updated "/home/core/.ssh/authorized_keys" Nov 5 16:03:21.110074 containerd[1624]: time="2025-11-05T16:03:21.109811455Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 16:03:21.110074 containerd[1624]: time="2025-11-05T16:03:21.109854365Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 16:03:21.110074 containerd[1624]: time="2025-11-05T16:03:21.109977035Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 16:03:21.110203 containerd[1624]: time="2025-11-05T16:03:21.110171891Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 16:03:21.110660 containerd[1624]: time="2025-11-05T16:03:21.110454551Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.110701183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.110822301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.110845314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.110859881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.111407298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.111446501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.111462191Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.111478852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.111500222Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.111534226Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.111659190Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.111750882Z" level=info msg="Start snapshots syncer" Nov 5 16:03:21.112971 containerd[1624]: time="2025-11-05T16:03:21.111815002Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 16:03:21.111301 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 16:03:21.113402 containerd[1624]: time="2025-11-05T16:03:21.112232345Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 16:03:21.113402 containerd[1624]: time="2025-11-05T16:03:21.112329267Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112478537Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112662812Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112698810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112724588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112742301Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112766927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112785683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112802785Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112837840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112855073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112891761Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.112972663Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.113003782Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 16:03:21.113527 containerd[1624]: time="2025-11-05T16:03:21.113016906Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 16:03:21.113936 containerd[1624]: time="2025-11-05T16:03:21.113035000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 16:03:21.113936 containerd[1624]: time="2025-11-05T16:03:21.113050499Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 16:03:21.113936 containerd[1624]: time="2025-11-05T16:03:21.113063724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 16:03:21.113936 containerd[1624]: time="2025-11-05T16:03:21.113080235Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 16:03:21.113936 containerd[1624]: time="2025-11-05T16:03:21.113116964Z" level=info msg="runtime interface created" Nov 5 16:03:21.113936 containerd[1624]: time="2025-11-05T16:03:21.113125680Z" level=info msg="created NRI interface" Nov 5 16:03:21.113936 containerd[1624]: time="2025-11-05T16:03:21.113140989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 16:03:21.113936 containerd[1624]: time="2025-11-05T16:03:21.113159073Z" level=info msg="Connect containerd service" Nov 5 
16:03:21.113936 containerd[1624]: time="2025-11-05T16:03:21.113210159Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 16:03:21.116602 containerd[1624]: time="2025-11-05T16:03:21.115999369Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 16:03:21.117103 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.457419274Z" level=info msg="Start subscribing containerd event" Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.457559807Z" level=info msg="Start recovering state" Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.457826006Z" level=info msg="Start event monitor" Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.457831256Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.457863226Z" level=info msg="Start cni network conf syncer for default" Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.457915174Z" level=info msg="Start streaming server" Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.457950600Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.457956681Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.457962452Z" level=info msg="runtime interface starting up..." Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.458072759Z" level=info msg="starting plugins..." Nov 5 16:03:21.458448 containerd[1624]: time="2025-11-05T16:03:21.458123685Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 16:03:21.458586 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 16:03:21.460621 containerd[1624]: time="2025-11-05T16:03:21.460059264Z" level=info msg="containerd successfully booted in 1.056511s" Nov 5 16:03:22.596086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:03:22.625482 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:03:22.626519 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 16:03:22.629164 systemd[1]: Startup finished in 3.070s (kernel) + 8.266s (initrd) + 7.657s (userspace) = 18.994s. Nov 5 16:03:23.414462 kubelet[1726]: E1105 16:03:23.414335 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:03:23.418580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:03:23.418775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:03:23.419183 systemd[1]: kubelet.service: Consumed 2.479s CPU time, 268.1M memory peak. Nov 5 16:03:30.975765 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:50638.service - OpenSSH per-connection server daemon (10.0.0.1:50638). 
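The kubelet exit above (run.go:72, /var/lib/kubelet/config.yaml missing) is the usual state of a node that has booted but not yet been initialized or joined: kubeadm is what writes that config file, and until it exists systemd keeps restarting the unit. A sketch of the diagnosis and the (hypothetical, parameters our own) initialization step:

    systemctl status kubelet --no-pager    # shows the restart loop and exit status 1
    ls /var/lib/kubelet/config.yaml        # absent until kubeadm generates it
    # Hypothetical: 'kubeadm init' (control plane) or 'kubeadm join' (worker)
    # writes /var/lib/kubelet/config.yaml, after which the kubelet starts cleanly.
    kubeadm init --pod-network-cidr=10.244.0.0/16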
Nov 5 16:03:31.030688 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 50638 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:03:31.032691 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:31.039871 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 16:03:31.040982 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 16:03:31.046888 systemd-logind[1592]: New session 1 of user core. Nov 5 16:03:31.066706 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 16:03:31.070183 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 16:03:31.089409 (systemd)[1744]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 16:03:31.091880 systemd-logind[1592]: New session c1 of user core. Nov 5 16:03:31.239772 systemd[1744]: Queued start job for default target default.target. Nov 5 16:03:31.262029 systemd[1744]: Created slice app.slice - User Application Slice. Nov 5 16:03:31.262068 systemd[1744]: Reached target paths.target - Paths. Nov 5 16:03:31.262126 systemd[1744]: Reached target timers.target - Timers. Nov 5 16:03:31.263836 systemd[1744]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 16:03:31.276933 systemd[1744]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 16:03:31.277074 systemd[1744]: Reached target sockets.target - Sockets. Nov 5 16:03:31.277116 systemd[1744]: Reached target basic.target - Basic System. Nov 5 16:03:31.277158 systemd[1744]: Reached target default.target - Main User Target. Nov 5 16:03:31.277199 systemd[1744]: Startup finished in 178ms. Nov 5 16:03:31.277536 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 16:03:31.279529 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 16:03:31.351990 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:50648.service - OpenSSH per-connection server daemon (10.0.0.1:50648). Nov 5 16:03:31.412592 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 50648 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:03:31.414487 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:31.420979 systemd-logind[1592]: New session 2 of user core. Nov 5 16:03:31.445742 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 16:03:31.503840 sshd[1758]: Connection closed by 10.0.0.1 port 50648 Nov 5 16:03:31.504273 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:31.520032 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:50648.service: Deactivated successfully. Nov 5 16:03:31.522601 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 16:03:31.523780 systemd-logind[1592]: Session 2 logged out. Waiting for processes to exit. Nov 5 16:03:31.527052 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:50654.service - OpenSSH per-connection server daemon (10.0.0.1:50654). Nov 5 16:03:31.527890 systemd-logind[1592]: Removed session 2. Nov 5 16:03:31.590751 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 50654 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:03:31.592894 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:31.598725 systemd-logind[1592]: New session 3 of user core. 
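Session 1 above was opened by public-key authentication for user core, with sshd logging the client key's SHA256 fingerprint. From the client side (10.0.0.1 in this log) the equivalent looks like the sketch below, with a hypothetical key path:

    ssh core@10.0.0.150                # server should log 'Accepted publickey ... SHA256:...' as above
    ssh-keygen -lf ~/.ssh/id_rsa.pub   # print the local key's fingerprint to compare with the log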
Nov 5 16:03:31.613736 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 16:03:31.670914 sshd[1767]: Connection closed by 10.0.0.1 port 50654 Nov 5 16:03:31.671271 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:31.693271 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:50654.service: Deactivated successfully. Nov 5 16:03:31.696683 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 16:03:31.697741 systemd-logind[1592]: Session 3 logged out. Waiting for processes to exit. Nov 5 16:03:31.702308 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:50662.service - OpenSSH per-connection server daemon (10.0.0.1:50662). Nov 5 16:03:31.703178 systemd-logind[1592]: Removed session 3. Nov 5 16:03:31.766580 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 50662 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:03:31.769075 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:31.775442 systemd-logind[1592]: New session 4 of user core. Nov 5 16:03:31.790713 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 16:03:31.850613 sshd[1776]: Connection closed by 10.0.0.1 port 50662 Nov 5 16:03:31.851071 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:31.865880 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:50662.service: Deactivated successfully. Nov 5 16:03:31.867806 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 16:03:31.868891 systemd-logind[1592]: Session 4 logged out. Waiting for processes to exit. Nov 5 16:03:31.871592 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:50672.service - OpenSSH per-connection server daemon (10.0.0.1:50672). Nov 5 16:03:31.872567 systemd-logind[1592]: Removed session 4. Nov 5 16:03:31.945975 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 50672 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:03:31.948341 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:31.955099 systemd-logind[1592]: New session 5 of user core. Nov 5 16:03:31.965694 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 16:03:32.063233 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 16:03:32.063638 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:03:32.087849 sudo[1786]: pam_unix(sudo:session): session closed for user root Nov 5 16:03:32.089835 sshd[1785]: Connection closed by 10.0.0.1 port 50672 Nov 5 16:03:32.090201 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:32.103128 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:50672.service: Deactivated successfully. Nov 5 16:03:32.105118 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 16:03:32.105922 systemd-logind[1592]: Session 5 logged out. Waiting for processes to exit. Nov 5 16:03:32.108782 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:50688.service - OpenSSH per-connection server daemon (10.0.0.1:50688). Nov 5 16:03:32.109615 systemd-logind[1592]: Removed session 5. Nov 5 16:03:32.163934 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 50688 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:03:32.165537 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:32.170149 systemd-logind[1592]: New session 6 of user core. 
Nov 5 16:03:32.191504 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 16:03:32.247869 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 16:03:32.248260 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:03:32.381868 sudo[1797]: pam_unix(sudo:session): session closed for user root Nov 5 16:03:32.391307 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 16:03:32.391773 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:03:32.404114 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 16:03:32.449631 augenrules[1819]: No rules Nov 5 16:03:32.451232 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 16:03:32.451530 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 16:03:32.452825 sudo[1796]: pam_unix(sudo:session): session closed for user root Nov 5 16:03:32.454790 sshd[1795]: Connection closed by 10.0.0.1 port 50688 Nov 5 16:03:32.455136 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:32.473271 systemd[1]: sshd@6-10.0.0.150:22-10.0.0.1:50688.service: Deactivated successfully. Nov 5 16:03:32.475426 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 16:03:32.476236 systemd-logind[1592]: Session 6 logged out. Waiting for processes to exit. Nov 5 16:03:32.479601 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:50694.service - OpenSSH per-connection server daemon (10.0.0.1:50694). Nov 5 16:03:32.480306 systemd-logind[1592]: Removed session 6. Nov 5 16:03:32.529184 sshd[1828]: Accepted publickey for core from 10.0.0.1 port 50694 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:03:32.530550 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:32.534948 systemd-logind[1592]: New session 7 of user core. Nov 5 16:03:32.552478 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 16:03:32.608530 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 16:03:32.608853 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:03:33.669508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 16:03:33.672157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:03:33.740468 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 16:03:33.766006 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 16:03:34.098333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
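A few entries up, session 6 removed the shipped audit rule files via sudo and restarted audit-rules, after which augenrules reported "No rules". augenrules assembles the active rule set from /etc/audit/rules.d/; a sketch for putting a rule back, with the rule, file name, and key entirely our own invention:

    echo '-w /etc/kubernetes/ -p wa -k kube-config' \
      > /etc/audit/rules.d/90-kube.rules   # hypothetical watch rule
    augenrules --load                      # regenerate and load the combined rule set
    auditctl -l                            # list the rules now active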
Nov 5 16:03:34.113942 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:03:34.291238 kubelet[1861]: E1105 16:03:34.291130 1861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:03:34.299069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:03:34.299312 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:03:34.299824 systemd[1]: kubelet.service: Consumed 511ms CPU time, 110.3M memory peak. Nov 5 16:03:34.764782 dockerd[1855]: time="2025-11-05T16:03:34.764695877Z" level=info msg="Starting up" Nov 5 16:03:34.765840 dockerd[1855]: time="2025-11-05T16:03:34.765797163Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 16:03:34.785946 dockerd[1855]: time="2025-11-05T16:03:34.785887078Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 16:03:36.621267 dockerd[1855]: time="2025-11-05T16:03:36.621203161Z" level=info msg="Loading containers: start." Nov 5 16:03:36.633403 kernel: Initializing XFRM netlink socket Nov 5 16:03:37.147082 systemd-networkd[1509]: docker0: Link UP Nov 5 16:03:37.462111 dockerd[1855]: time="2025-11-05T16:03:37.461924343Z" level=info msg="Loading containers: done." Nov 5 16:03:37.478087 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1465923064-merged.mount: Deactivated successfully. Nov 5 16:03:37.524273 dockerd[1855]: time="2025-11-05T16:03:37.524185605Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 16:03:37.524489 dockerd[1855]: time="2025-11-05T16:03:37.524309488Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 16:03:37.524489 dockerd[1855]: time="2025-11-05T16:03:37.524456824Z" level=info msg="Initializing buildkit" Nov 5 16:03:37.562252 dockerd[1855]: time="2025-11-05T16:03:37.562174408Z" level=info msg="Completed buildkit initialization" Nov 5 16:03:37.568494 dockerd[1855]: time="2025-11-05T16:03:37.568410633Z" level=info msg="Daemon has completed initialization" Nov 5 16:03:37.568830 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 16:03:37.569985 dockerd[1855]: time="2025-11-05T16:03:37.569918321Z" level=info msg="API listen on /run/docker.sock" Nov 5 16:03:38.516462 containerd[1624]: time="2025-11-05T16:03:38.516273845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 16:03:39.296157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600286771.mount: Deactivated successfully. 
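Below, dockerd comes up on overlay2 but warns that it is "Not using native diff" because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; image builds still work, just through a slower diff path. A hedged way to confirm the driver and that setting at runtime:

    docker info --format '{{.Driver}}'   # expect: overlay2
    docker info | grep -i overlay        # 'Native Overlay Diff: false' should appear, matching the warning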
Nov 5 16:03:41.409407 containerd[1624]: time="2025-11-05T16:03:41.409282108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:41.412390 containerd[1624]: time="2025-11-05T16:03:41.410703474Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 5 16:03:41.412622 containerd[1624]: time="2025-11-05T16:03:41.412591715Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:41.417426 containerd[1624]: time="2025-11-05T16:03:41.417335491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:41.418706 containerd[1624]: time="2025-11-05T16:03:41.418617495Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.902174153s" Nov 5 16:03:41.418706 containerd[1624]: time="2025-11-05T16:03:41.418707965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 5 16:03:41.419608 containerd[1624]: time="2025-11-05T16:03:41.419578588Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 16:03:43.018647 containerd[1624]: time="2025-11-05T16:03:43.018575337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:43.019446 containerd[1624]: time="2025-11-05T16:03:43.019363115Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 5 16:03:43.020606 containerd[1624]: time="2025-11-05T16:03:43.020518381Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:43.023375 containerd[1624]: time="2025-11-05T16:03:43.023313062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:43.024470 containerd[1624]: time="2025-11-05T16:03:43.024434756Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.604818688s" Nov 5 16:03:43.024552 containerd[1624]: time="2025-11-05T16:03:43.024472908Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 5 16:03:43.024979 containerd[1624]: 
time="2025-11-05T16:03:43.024955493Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 16:03:44.450675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 16:03:44.453011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:03:44.876494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:03:44.902000 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:03:44.979532 kubelet[2160]: E1105 16:03:44.979467 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:03:44.983977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:03:44.984261 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:03:44.984740 systemd[1]: kubelet.service: Consumed 448ms CPU time, 110.7M memory peak. Nov 5 16:03:45.800300 containerd[1624]: time="2025-11-05T16:03:45.800078609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:45.801171 containerd[1624]: time="2025-11-05T16:03:45.801129079Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 5 16:03:45.802547 containerd[1624]: time="2025-11-05T16:03:45.802506412Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:45.805606 containerd[1624]: time="2025-11-05T16:03:45.805542205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:45.806624 containerd[1624]: time="2025-11-05T16:03:45.806581474Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.781594272s" Nov 5 16:03:45.806682 containerd[1624]: time="2025-11-05T16:03:45.806626088Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 5 16:03:45.807210 containerd[1624]: time="2025-11-05T16:03:45.807175418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 16:03:47.534054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620821595.mount: Deactivated successfully. 
Nov 5 16:03:48.017145 containerd[1624]: time="2025-11-05T16:03:48.017069548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:48.017797 containerd[1624]: time="2025-11-05T16:03:48.017750596Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 5 16:03:48.018995 containerd[1624]: time="2025-11-05T16:03:48.018958861Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:48.020921 containerd[1624]: time="2025-11-05T16:03:48.020893109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:48.021493 containerd[1624]: time="2025-11-05T16:03:48.021441257Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.214237074s" Nov 5 16:03:48.021493 containerd[1624]: time="2025-11-05T16:03:48.021489046Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 5 16:03:48.021944 containerd[1624]: time="2025-11-05T16:03:48.021908683Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 16:03:48.715572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389292305.mount: Deactivated successfully. 
Nov 5 16:03:49.780328 containerd[1624]: time="2025-11-05T16:03:49.780247506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:49.780964 containerd[1624]: time="2025-11-05T16:03:49.780930748Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 5 16:03:49.782173 containerd[1624]: time="2025-11-05T16:03:49.782123625Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:49.785010 containerd[1624]: time="2025-11-05T16:03:49.784949815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:49.785891 containerd[1624]: time="2025-11-05T16:03:49.785853179Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.763917645s" Nov 5 16:03:49.785891 containerd[1624]: time="2025-11-05T16:03:49.785883847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 5 16:03:49.786430 containerd[1624]: time="2025-11-05T16:03:49.786403902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 16:03:50.242916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088189189.mount: Deactivated successfully. 
Nov 5 16:03:50.248593 containerd[1624]: time="2025-11-05T16:03:50.248539231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:03:50.249378 containerd[1624]: time="2025-11-05T16:03:50.249313282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 16:03:50.250528 containerd[1624]: time="2025-11-05T16:03:50.250459602Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:03:50.253101 containerd[1624]: time="2025-11-05T16:03:50.253060419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:03:50.253707 containerd[1624]: time="2025-11-05T16:03:50.253673359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 467.241745ms" Nov 5 16:03:50.253800 containerd[1624]: time="2025-11-05T16:03:50.253709607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 16:03:50.254382 containerd[1624]: time="2025-11-05T16:03:50.254318118Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 16:03:50.939021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211625834.mount: Deactivated successfully. 
Nov 5 16:03:54.559768 containerd[1624]: time="2025-11-05T16:03:54.559619617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:54.560706 containerd[1624]: time="2025-11-05T16:03:54.560294222Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 5 16:03:54.561696 containerd[1624]: time="2025-11-05T16:03:54.561645503Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:54.565201 containerd[1624]: time="2025-11-05T16:03:54.565149533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:54.566553 containerd[1624]: time="2025-11-05T16:03:54.566515682Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.312111032s" Nov 5 16:03:54.566640 containerd[1624]: time="2025-11-05T16:03:54.566553576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 5 16:03:55.022655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 5 16:03:55.025160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:03:55.254313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:03:55.281746 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:03:55.323458 kubelet[2322]: E1105 16:03:55.323328 2322 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:03:55.328790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:03:55.329053 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:03:55.329774 systemd[1]: kubelet.service: Consumed 260ms CPU time, 108.7M memory peak. Nov 5 16:03:59.260950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:03:59.261244 systemd[1]: kubelet.service: Consumed 260ms CPU time, 108.7M memory peak. Nov 5 16:03:59.265529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:03:59.314427 systemd[1]: Reload requested from client PID 2338 ('systemctl') (unit session-7.scope)... Nov 5 16:03:59.314476 systemd[1]: Reloading... Nov 5 16:03:59.445395 zram_generator::config[2387]: No configuration found. Nov 5 16:04:01.087215 systemd[1]: Reloading finished in 1772 ms. Nov 5 16:04:01.178906 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 16:04:01.179021 systemd[1]: kubelet.service: Failed with result 'signal'. 
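The "Reload requested from client PID 2338 ('systemctl')" entry below is a daemon-reload issued from session-7, during which systemd re-ran its generators (zram_generator found no config) and the kubelet's in-flight start was terminated with SIGTERM. The equivalent sequence, sketched:

    systemctl daemon-reload                    # re-run generators and re-read unit files
    systemctl restart kubelet                  # pick up new unit configuration, as the log does next
    journalctl -u kubelet -n 20 --no-pager     # inspect the most recent kubelet output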
Nov 5 16:04:01.179397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:04:01.179455 systemd[1]: kubelet.service: Consumed 218ms CPU time, 98.2M memory peak. Nov 5 16:04:01.181439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:04:01.575481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:04:01.603461 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 16:04:01.721131 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 16:04:01.721131 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 16:04:01.721131 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 16:04:01.721729 kubelet[2429]: I1105 16:04:01.721146 2429 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 16:04:02.383747 kubelet[2429]: I1105 16:04:02.383663 2429 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 16:04:02.383747 kubelet[2429]: I1105 16:04:02.383711 2429 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 16:04:02.384028 kubelet[2429]: I1105 16:04:02.383987 2429 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 16:04:02.424714 kubelet[2429]: E1105 16:04:02.424626 2429 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 16:04:02.425833 kubelet[2429]: I1105 16:04:02.425791 2429 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 16:04:02.449110 kubelet[2429]: I1105 16:04:02.449037 2429 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 16:04:02.462746 kubelet[2429]: I1105 16:04:02.461248 2429 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 16:04:02.462746 kubelet[2429]: I1105 16:04:02.461756 2429 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 16:04:02.462746 kubelet[2429]: I1105 16:04:02.461793 2429 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 16:04:02.462746 kubelet[2429]: I1105 16:04:02.462114 2429 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 16:04:02.463157 kubelet[2429]: I1105 16:04:02.462129 2429 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 16:04:02.463157 kubelet[2429]: I1105 16:04:02.462405 2429 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:04:02.468518 kubelet[2429]: I1105 16:04:02.466242 2429 kubelet.go:480] "Attempting to sync node with API server" Nov 5 16:04:02.468518 kubelet[2429]: I1105 16:04:02.467264 2429 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 16:04:02.471686 kubelet[2429]: I1105 16:04:02.470251 2429 kubelet.go:386] "Adding apiserver pod source" Nov 5 16:04:02.478863 kubelet[2429]: I1105 16:04:02.475278 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 16:04:02.480511 kubelet[2429]: E1105 16:04:02.480407 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 16:04:02.481014 kubelet[2429]: E1105 16:04:02.480976 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 16:04:02.482541 
kubelet[2429]: I1105 16:04:02.482476 2429 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 16:04:02.483301 kubelet[2429]: I1105 16:04:02.483155 2429 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 16:04:02.489252 kubelet[2429]: W1105 16:04:02.486427 2429 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 16:04:02.503249 kubelet[2429]: I1105 16:04:02.501092 2429 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 16:04:02.503249 kubelet[2429]: I1105 16:04:02.502609 2429 server.go:1289] "Started kubelet" Nov 5 16:04:02.509372 kubelet[2429]: I1105 16:04:02.507499 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 16:04:02.509372 kubelet[2429]: I1105 16:04:02.507810 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 16:04:02.509372 kubelet[2429]: I1105 16:04:02.508047 2429 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 16:04:02.509372 kubelet[2429]: I1105 16:04:02.508128 2429 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 16:04:02.509614 kubelet[2429]: I1105 16:04:02.509593 2429 server.go:317] "Adding debug handlers to kubelet server" Nov 5 16:04:02.512410 kubelet[2429]: I1105 16:04:02.512343 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 16:04:02.515541 kubelet[2429]: E1105 16:04:02.513705 2429 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187527d9153433d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 16:04:02.502546384 +0000 UTC m=+0.889806703,LastTimestamp:2025-11-05 16:04:02.502546384 +0000 UTC m=+0.889806703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 16:04:02.515729 kubelet[2429]: E1105 16:04:02.515631 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 16:04:02.515729 kubelet[2429]: I1105 16:04:02.515672 2429 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 16:04:02.515994 kubelet[2429]: I1105 16:04:02.515965 2429 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 16:04:02.516126 kubelet[2429]: I1105 16:04:02.516101 2429 reconciler.go:26] "Reconciler: start to sync state" Nov 5 16:04:02.518264 kubelet[2429]: E1105 16:04:02.517588 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms" Nov 5 16:04:02.518264 kubelet[2429]: E1105 16:04:02.517744 2429 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 16:04:02.518264 kubelet[2429]: I1105 16:04:02.518116 2429 factory.go:223] Registration of the systemd container factory successfully Nov 5 16:04:02.518483 kubelet[2429]: I1105 16:04:02.518295 2429 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 16:04:02.519667 kubelet[2429]: E1105 16:04:02.519634 2429 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 16:04:02.521798 kubelet[2429]: I1105 16:04:02.521761 2429 factory.go:223] Registration of the containerd container factory successfully Nov 5 16:04:02.556865 kubelet[2429]: I1105 16:04:02.556789 2429 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 16:04:02.556865 kubelet[2429]: I1105 16:04:02.556817 2429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 16:04:02.556865 kubelet[2429]: I1105 16:04:02.556845 2429 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:04:02.568999 kubelet[2429]: I1105 16:04:02.567494 2429 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 16:04:02.571252 kubelet[2429]: I1105 16:04:02.571146 2429 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 16:04:02.571252 kubelet[2429]: I1105 16:04:02.571226 2429 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 16:04:02.572487 kubelet[2429]: I1105 16:04:02.571615 2429 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
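
[Annotation] Every watch, lease, CSR, and event call above fails with "dial tcp 10.0.0.150:6443: connect: connection refused". That is expected during control-plane bootstrap: this kubelet must itself start the kube-apiserver static pod before the endpoint exists, so the client-go reflectors simply retry. A hypothetical readiness probe for that window (the address is taken from the log; the polling loop is illustrative, not the kubelet's own):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Poll the endpoint the kubelet is dialing until the apiserver
        // static pod comes up and the TCP handshake succeeds.
        const addr = "10.0.0.150:6443"
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable at", addr)
                return
            }
            fmt.Println("still waiting:", err) // e.g. "connect: connection refused"
            time.Sleep(time.Second)
        }
    }
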
Nov 5 16:04:02.572487 kubelet[2429]: I1105 16:04:02.571636 2429 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 16:04:02.572487 kubelet[2429]: E1105 16:04:02.571726 2429 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 16:04:02.572870 kubelet[2429]: E1105 16:04:02.572836 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 16:04:02.616576 kubelet[2429]: E1105 16:04:02.616424 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 16:04:02.672189 kubelet[2429]: E1105 16:04:02.671897 2429 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 16:04:02.717556 kubelet[2429]: E1105 16:04:02.717450 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 16:04:02.720208 kubelet[2429]: E1105 16:04:02.720093 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms" Nov 5 16:04:02.818813 kubelet[2429]: E1105 16:04:02.817784 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 16:04:02.874115 kubelet[2429]: E1105 16:04:02.873030 2429 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 16:04:02.919098 kubelet[2429]: E1105 16:04:02.918903 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 16:04:02.998834 kubelet[2429]: I1105 16:04:02.998735 2429 policy_none.go:49] "None policy: Start" Nov 5 16:04:02.998834 kubelet[2429]: I1105 16:04:02.998799 2429 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 16:04:02.998834 kubelet[2429]: I1105 16:04:02.998827 2429 state_mem.go:35] "Initializing new in-memory state store" Nov 5 16:04:03.020378 kubelet[2429]: E1105 16:04:03.019481 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 16:04:03.020526 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 16:04:03.065837 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 16:04:03.081647 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
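
[Annotation] The three slices created above are the kubelet's QoS cgroup parents: kubepods.slice with kubepods-burstable.slice and kubepods-besteffort.slice beneath it (Guaranteed pods land directly under kubepods.slice). Individual pods then get per-pod slices like kubepods-burstable-pod<uid>.slice, as seen a moment later, with dashes in the pod UID escaped to underscores for systemd. A sketch of that naming scheme, inferred from the slice names in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod reconstructs the systemd slice name for a pod, following
    // the pattern visible in this log (dashes in the UID become underscores).
    func sliceForPod(qos, uid string) string {
        esc := strings.ReplaceAll(uid, "-", "_")
        switch qos {
        case "Guaranteed":
            return fmt.Sprintf("kubepods-pod%s.slice", esc)
        case "Burstable":
            return fmt.Sprintf("kubepods-burstable-pod%s.slice", esc)
        default: // BestEffort
            return fmt.Sprintf("kubepods-besteffort-pod%s.slice", esc)
        }
    }

    func main() {
        // Matches kubepods-burstable-pod20c890a2....slice below.
        fmt.Println(sliceForPod("Burstable", "20c890a246d840d308022312da9174cb"))
        // Matches kubepods-besteffort-podf5f9b9be_ac4e....slice at the end of the log.
        fmt.Println(sliceForPod("BestEffort", "f5f9b9be-ac4e-491b-9728-ddbdd2b38841"))
    }
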
Nov 5 16:04:03.105365 kubelet[2429]: E1105 16:04:03.105270 2429 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 16:04:03.105685 kubelet[2429]: I1105 16:04:03.105662 2429 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 16:04:03.105771 kubelet[2429]: I1105 16:04:03.105688 2429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 16:04:03.106194 kubelet[2429]: I1105 16:04:03.106022 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 16:04:03.107904 kubelet[2429]: E1105 16:04:03.107798 2429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 16:04:03.107904 kubelet[2429]: E1105 16:04:03.107881 2429 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 16:04:03.121770 kubelet[2429]: E1105 16:04:03.121706 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms" Nov 5 16:04:03.209559 kubelet[2429]: I1105 16:04:03.209472 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:04:03.211614 kubelet[2429]: E1105 16:04:03.211520 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Nov 5 16:04:03.312808 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. 
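
[Annotation] Note the retry interval on the "Failed to ensure lease exists" errors: 200 ms and 400 ms earlier, 800 ms here, then 1.6 s and 3.2 s further down. That is a plain doubling backoff. A trivial sketch of the schedule; the 7 s cap is an assumption for illustration and is never reached in this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Reproduce the intervals printed by the lease controller:
        // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, doubling each failure.
        interval := 200 * time.Millisecond
        const maxDelay = 7 * time.Second // assumed cap, not shown in the log
        for i := 0; i < 6; i++ {
            fmt.Println("retry in", interval)
            interval *= 2
            if interval > maxDelay {
                interval = maxDelay
            }
        }
    }
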
Nov 5 16:04:03.319920 kubelet[2429]: I1105 16:04:03.319821 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:03.319920 kubelet[2429]: I1105 16:04:03.319883 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:03.319920 kubelet[2429]: I1105 16:04:03.319910 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 5 16:04:03.319920 kubelet[2429]: I1105 16:04:03.319935 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58cc59dcdea031e95055e279b3259bb0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"58cc59dcdea031e95055e279b3259bb0\") " pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:03.320248 kubelet[2429]: I1105 16:04:03.319961 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:03.320248 kubelet[2429]: I1105 16:04:03.319984 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:03.320248 kubelet[2429]: I1105 16:04:03.320007 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:03.320248 kubelet[2429]: I1105 16:04:03.320030 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58cc59dcdea031e95055e279b3259bb0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"58cc59dcdea031e95055e279b3259bb0\") " pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:03.320248 kubelet[2429]: I1105 16:04:03.320088 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58cc59dcdea031e95055e279b3259bb0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"58cc59dcdea031e95055e279b3259bb0\") " 
pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:03.350293 kubelet[2429]: E1105 16:04:03.350229 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 16:04:03.351457 kubelet[2429]: E1105 16:04:03.351175 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:04:03.356566 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Nov 5 16:04:03.363781 kubelet[2429]: E1105 16:04:03.360734 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:04:03.407991 systemd[1]: Created slice kubepods-burstable-pod58cc59dcdea031e95055e279b3259bb0.slice - libcontainer container kubepods-burstable-pod58cc59dcdea031e95055e279b3259bb0.slice. Nov 5 16:04:03.414680 kubelet[2429]: I1105 16:04:03.414601 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:04:03.415146 kubelet[2429]: E1105 16:04:03.415098 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Nov 5 16:04:03.416029 kubelet[2429]: E1105 16:04:03.415974 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:04:03.440804 kubelet[2429]: E1105 16:04:03.440707 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 16:04:03.653113 kubelet[2429]: E1105 16:04:03.652918 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:03.654158 containerd[1624]: time="2025-11-05T16:04:03.654104378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 5 16:04:03.661789 kubelet[2429]: E1105 16:04:03.661690 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:03.662577 containerd[1624]: time="2025-11-05T16:04:03.662474724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 5 16:04:03.717339 kubelet[2429]: E1105 16:04:03.717240 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:03.718128 containerd[1624]: time="2025-11-05T16:04:03.718057460Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:58cc59dcdea031e95055e279b3259bb0,Namespace:kube-system,Attempt:0,}" Nov 5 16:04:03.817833 kubelet[2429]: I1105 16:04:03.817737 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:04:03.818491 kubelet[2429]: E1105 16:04:03.818403 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Nov 5 16:04:03.928423 kubelet[2429]: E1105 16:04:03.923854 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s" Nov 5 16:04:04.078410 containerd[1624]: time="2025-11-05T16:04:04.078305057Z" level=info msg="connecting to shim 15b1000fdb1ddfe3d50a26e68b62dc66ec8194ac508cab2273ccec30ebeb8e86" address="unix:///run/containerd/s/802f17289c35c6b9a361bd66486a3c93fc469654057e6b3d80d62499c11d8174" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:04.315510 kubelet[2429]: E1105 16:04:04.315468 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 16:04:04.315884 kubelet[2429]: E1105 16:04:04.315859 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 16:04:04.316370 containerd[1624]: time="2025-11-05T16:04:04.316038560Z" level=info msg="connecting to shim 5be37dfab0fedfd5dc2a17eca4960e5f4138514c54e643572243daf035355eaf" address="unix:///run/containerd/s/7aebebd4e6135cd16b916e5a381c009e70813ac03a7635fb318880bc6608ddd0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:04.325662 containerd[1624]: time="2025-11-05T16:04:04.325583683Z" level=info msg="connecting to shim 13b59cd20e1af9f61cbc4fcd1d7c661dcdc5cd9b66e1858d1a781173fe72a77d" address="unix:///run/containerd/s/0c05d670bd829dec458bacf13ec5b07cb3b1ae759201aed2e8e3ed0c4a0e8f09" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:04.369822 systemd[1]: Started cri-containerd-15b1000fdb1ddfe3d50a26e68b62dc66ec8194ac508cab2273ccec30ebeb8e86.scope - libcontainer container 15b1000fdb1ddfe3d50a26e68b62dc66ec8194ac508cab2273ccec30ebeb8e86. Nov 5 16:04:04.425623 systemd[1]: Started cri-containerd-5be37dfab0fedfd5dc2a17eca4960e5f4138514c54e643572243daf035355eaf.scope - libcontainer container 5be37dfab0fedfd5dc2a17eca4960e5f4138514c54e643572243daf035355eaf. Nov 5 16:04:04.444588 systemd[1]: Started cri-containerd-13b59cd20e1af9f61cbc4fcd1d7c661dcdc5cd9b66e1858d1a781173fe72a77d.scope - libcontainer container 13b59cd20e1af9f61cbc4fcd1d7c661dcdc5cd9b66e1858d1a781173fe72a77d. 
Nov 5 16:04:04.584601 containerd[1624]: time="2025-11-05T16:04:04.556290533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"15b1000fdb1ddfe3d50a26e68b62dc66ec8194ac508cab2273ccec30ebeb8e86\"" Nov 5 16:04:04.584751 kubelet[2429]: E1105 16:04:04.559807 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:04.587242 containerd[1624]: time="2025-11-05T16:04:04.586671253Z" level=info msg="CreateContainer within sandbox \"15b1000fdb1ddfe3d50a26e68b62dc66ec8194ac508cab2273ccec30ebeb8e86\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 16:04:04.608387 containerd[1624]: time="2025-11-05T16:04:04.607678948Z" level=info msg="Container b212d0e22c7fc36e716477c3a958ebd3b8660c694b8dc470b1dd26ef070273ac: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:04:04.613615 kubelet[2429]: E1105 16:04:04.613552 2429 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 16:04:04.627407 kubelet[2429]: I1105 16:04:04.623678 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:04:04.627407 kubelet[2429]: E1105 16:04:04.624385 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Nov 5 16:04:04.630553 containerd[1624]: time="2025-11-05T16:04:04.630283792Z" level=info msg="CreateContainer within sandbox \"15b1000fdb1ddfe3d50a26e68b62dc66ec8194ac508cab2273ccec30ebeb8e86\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b212d0e22c7fc36e716477c3a958ebd3b8660c694b8dc470b1dd26ef070273ac\"" Nov 5 16:04:04.633378 containerd[1624]: time="2025-11-05T16:04:04.631533915Z" level=info msg="StartContainer for \"b212d0e22c7fc36e716477c3a958ebd3b8660c694b8dc470b1dd26ef070273ac\"" Nov 5 16:04:04.633378 containerd[1624]: time="2025-11-05T16:04:04.633273446Z" level=info msg="connecting to shim b212d0e22c7fc36e716477c3a958ebd3b8660c694b8dc470b1dd26ef070273ac" address="unix:///run/containerd/s/802f17289c35c6b9a361bd66486a3c93fc469654057e6b3d80d62499c11d8174" protocol=ttrpc version=3 Nov 5 16:04:04.635617 update_engine[1602]: I20251105 16:04:04.634459 1602 update_attempter.cc:509] Updating boot flags... Nov 5 16:04:05.018638 systemd[1]: Started cri-containerd-b212d0e22c7fc36e716477c3a958ebd3b8660c694b8dc470b1dd26ef070273ac.scope - libcontainer container b212d0e22c7fc36e716477c3a958ebd3b8660c694b8dc470b1dd26ef070273ac. 
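
[Annotation] Once a sandbox id comes back, the kubelet issues CreateContainer within that sandbox and then StartContainer; note that the new container's shim address above is the same unix socket as its sandbox's, i.e. one shim serves the whole pod. Continuing the CRI sketch from earlier as a standalone helper (the image name and version are assumptions; the log only shows kubeletVersion v1.33.0, not the scheduler image):

    package sketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // startInSandbox sketches the pair of calls behind "CreateContainer
    // within sandbox ... returns container id" and "StartContainer ...
    // returns successfully" in the log.
    func startInSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {

        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
                // Assumed image reference, for illustration only.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.33.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        })
        return created.ContainerId, err
    }
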
Nov 5 16:04:05.030723 containerd[1624]: time="2025-11-05T16:04:05.030652672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:58cc59dcdea031e95055e279b3259bb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"13b59cd20e1af9f61cbc4fcd1d7c661dcdc5cd9b66e1858d1a781173fe72a77d\"" Nov 5 16:04:05.033873 kubelet[2429]: E1105 16:04:05.033843 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:05.084476 containerd[1624]: time="2025-11-05T16:04:05.084417001Z" level=info msg="CreateContainer within sandbox \"13b59cd20e1af9f61cbc4fcd1d7c661dcdc5cd9b66e1858d1a781173fe72a77d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 16:04:05.116295 containerd[1624]: time="2025-11-05T16:04:05.115977276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5be37dfab0fedfd5dc2a17eca4960e5f4138514c54e643572243daf035355eaf\"" Nov 5 16:04:05.127372 kubelet[2429]: E1105 16:04:05.127278 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:05.155751 containerd[1624]: time="2025-11-05T16:04:05.154520152Z" level=info msg="CreateContainer within sandbox \"5be37dfab0fedfd5dc2a17eca4960e5f4138514c54e643572243daf035355eaf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 16:04:05.163390 containerd[1624]: time="2025-11-05T16:04:05.163297916Z" level=info msg="Container 72eae40982f8e9f2ef71ba6ae17b350cde785f73ca888ee4dde8d0995f0da0ca: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:04:05.195168 containerd[1624]: time="2025-11-05T16:04:05.195035066Z" level=info msg="CreateContainer within sandbox \"13b59cd20e1af9f61cbc4fcd1d7c661dcdc5cd9b66e1858d1a781173fe72a77d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"72eae40982f8e9f2ef71ba6ae17b350cde785f73ca888ee4dde8d0995f0da0ca\"" Nov 5 16:04:05.199151 containerd[1624]: time="2025-11-05T16:04:05.198067737Z" level=info msg="StartContainer for \"72eae40982f8e9f2ef71ba6ae17b350cde785f73ca888ee4dde8d0995f0da0ca\"" Nov 5 16:04:05.208857 containerd[1624]: time="2025-11-05T16:04:05.203694216Z" level=info msg="connecting to shim 72eae40982f8e9f2ef71ba6ae17b350cde785f73ca888ee4dde8d0995f0da0ca" address="unix:///run/containerd/s/0c05d670bd829dec458bacf13ec5b07cb3b1ae759201aed2e8e3ed0c4a0e8f09" protocol=ttrpc version=3 Nov 5 16:04:05.216867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1406017734.mount: Deactivated successfully. 
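
[Annotation] The recurring dns.go "Nameserver limits exceeded" error means the host resolv.conf lists more than the three nameservers the glibc resolver supports, so the kubelet drops the extras and keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8) when building pod resolv.conf files. It is a warning, not a failure. A sketch of that truncation, emulating the check rather than quoting the kubelet's code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Keep at most three nameservers from the host resolv.conf,
        // warning about the rest, as the kubelet's dns.go does.
        const maxNameservers = 3 // glibc resolver limit
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits were exceeded, keeping: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }
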
Nov 5 16:04:05.295420 containerd[1624]: time="2025-11-05T16:04:05.294620963Z" level=info msg="Container 636dd6da464cbf184299fec20cf95c8c73f23969008633deb5ea74d44c12eb5d: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:04:05.300267 containerd[1624]: time="2025-11-05T16:04:05.300215271Z" level=info msg="StartContainer for \"b212d0e22c7fc36e716477c3a958ebd3b8660c694b8dc470b1dd26ef070273ac\" returns successfully" Nov 5 16:04:05.349265 containerd[1624]: time="2025-11-05T16:04:05.347849826Z" level=info msg="CreateContainer within sandbox \"5be37dfab0fedfd5dc2a17eca4960e5f4138514c54e643572243daf035355eaf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"636dd6da464cbf184299fec20cf95c8c73f23969008633deb5ea74d44c12eb5d\"" Nov 5 16:04:05.351840 containerd[1624]: time="2025-11-05T16:04:05.351794385Z" level=info msg="StartContainer for \"636dd6da464cbf184299fec20cf95c8c73f23969008633deb5ea74d44c12eb5d\"" Nov 5 16:04:05.353406 containerd[1624]: time="2025-11-05T16:04:05.353307024Z" level=info msg="connecting to shim 636dd6da464cbf184299fec20cf95c8c73f23969008633deb5ea74d44c12eb5d" address="unix:///run/containerd/s/7aebebd4e6135cd16b916e5a381c009e70813ac03a7635fb318880bc6608ddd0" protocol=ttrpc version=3 Nov 5 16:04:05.405619 systemd[1]: Started cri-containerd-72eae40982f8e9f2ef71ba6ae17b350cde785f73ca888ee4dde8d0995f0da0ca.scope - libcontainer container 72eae40982f8e9f2ef71ba6ae17b350cde785f73ca888ee4dde8d0995f0da0ca. Nov 5 16:04:05.414744 systemd[1]: Started cri-containerd-636dd6da464cbf184299fec20cf95c8c73f23969008633deb5ea74d44c12eb5d.scope - libcontainer container 636dd6da464cbf184299fec20cf95c8c73f23969008633deb5ea74d44c12eb5d. Nov 5 16:04:05.527064 kubelet[2429]: E1105 16:04:05.526953 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="3.2s" Nov 5 16:04:05.534966 containerd[1624]: time="2025-11-05T16:04:05.534732684Z" level=info msg="StartContainer for \"72eae40982f8e9f2ef71ba6ae17b350cde785f73ca888ee4dde8d0995f0da0ca\" returns successfully" Nov 5 16:04:05.535119 containerd[1624]: time="2025-11-05T16:04:05.534820361Z" level=info msg="StartContainer for \"636dd6da464cbf184299fec20cf95c8c73f23969008633deb5ea74d44c12eb5d\" returns successfully" Nov 5 16:04:05.601292 kubelet[2429]: E1105 16:04:05.601143 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:04:05.604009 kubelet[2429]: E1105 16:04:05.602676 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:05.610235 kubelet[2429]: E1105 16:04:05.610012 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:04:05.610475 kubelet[2429]: E1105 16:04:05.610423 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:05.629034 kubelet[2429]: E1105 16:04:05.628848 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 
16:04:05.632504 kubelet[2429]: E1105 16:04:05.632280 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:06.228372 kubelet[2429]: I1105 16:04:06.228112 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:04:06.632444 kubelet[2429]: E1105 16:04:06.630984 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:04:06.632444 kubelet[2429]: E1105 16:04:06.631143 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:06.632444 kubelet[2429]: E1105 16:04:06.631943 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:04:06.632444 kubelet[2429]: E1105 16:04:06.632046 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:06.632444 kubelet[2429]: E1105 16:04:06.632197 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:04:06.632444 kubelet[2429]: E1105 16:04:06.632312 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:07.669373 kubelet[2429]: E1105 16:04:07.669126 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:04:07.669373 kubelet[2429]: E1105 16:04:07.669340 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:09.674303 kubelet[2429]: E1105 16:04:09.674215 2429 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 16:04:09.849904 kubelet[2429]: I1105 16:04:09.849829 2429 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 16:04:09.917851 kubelet[2429]: I1105 16:04:09.917531 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:09.956461 kubelet[2429]: E1105 16:04:09.952636 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:09.956461 kubelet[2429]: I1105 16:04:09.952674 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 16:04:09.960778 kubelet[2429]: E1105 16:04:09.959369 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 16:04:09.960778 kubelet[2429]: I1105 16:04:09.959626 2429 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:09.968984 kubelet[2429]: E1105 16:04:09.967511 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:10.126220 kernel: hrtimer: interrupt took 4738689 ns Nov 5 16:04:10.506170 kubelet[2429]: I1105 16:04:10.505944 2429 apiserver.go:52] "Watching apiserver" Nov 5 16:04:10.516857 kubelet[2429]: I1105 16:04:10.516230 2429 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 16:04:13.720000 kubelet[2429]: I1105 16:04:13.719949 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:13.726668 kubelet[2429]: E1105 16:04:13.726602 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:14.661358 kubelet[2429]: E1105 16:04:14.661301 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:14.771758 kubelet[2429]: I1105 16:04:14.771707 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 16:04:14.777027 kubelet[2429]: E1105 16:04:14.776947 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:15.567535 systemd[1]: Reload requested from client PID 2732 ('systemctl') (unit session-7.scope)... Nov 5 16:04:15.567558 systemd[1]: Reloading... Nov 5 16:04:15.660372 zram_generator::config[2776]: No configuration found. Nov 5 16:04:15.663946 kubelet[2429]: E1105 16:04:15.663905 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:15.906277 systemd[1]: Reloading finished in 338 ms. Nov 5 16:04:15.944905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:04:15.969956 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 16:04:15.970377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:04:15.970451 systemd[1]: kubelet.service: Consumed 1.959s CPU time, 135.2M memory peak. Nov 5 16:04:15.972782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:04:16.223470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:04:16.235927 (kubelet)[2821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 16:04:16.281899 kubelet[2821]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 16:04:16.281899 kubelet[2821]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 5 16:04:16.281899 kubelet[2821]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 16:04:16.281899 kubelet[2821]: I1105 16:04:16.281528 2821 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 16:04:16.289927 kubelet[2821]: I1105 16:04:16.289880 2821 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 16:04:16.289927 kubelet[2821]: I1105 16:04:16.289911 2821 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 16:04:16.290166 kubelet[2821]: I1105 16:04:16.290147 2821 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 16:04:16.294468 kubelet[2821]: I1105 16:04:16.294200 2821 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 16:04:16.297132 kubelet[2821]: I1105 16:04:16.297093 2821 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 16:04:16.301242 kubelet[2821]: I1105 16:04:16.301214 2821 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 16:04:16.308171 kubelet[2821]: I1105 16:04:16.308120 2821 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 16:04:16.308557 kubelet[2821]: I1105 16:04:16.308514 2821 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 16:04:16.308807 kubelet[2821]: I1105 16:04:16.308559 2821 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 16:04:16.308893 kubelet[2821]: I1105 16:04:16.308826 2821 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 16:04:16.308893 kubelet[2821]: I1105 
16:04:16.308839 2821 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 16:04:16.308937 kubelet[2821]: I1105 16:04:16.308894 2821 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:04:16.309138 kubelet[2821]: I1105 16:04:16.309123 2821 kubelet.go:480] "Attempting to sync node with API server" Nov 5 16:04:16.309189 kubelet[2821]: I1105 16:04:16.309142 2821 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 16:04:16.309189 kubelet[2821]: I1105 16:04:16.309173 2821 kubelet.go:386] "Adding apiserver pod source" Nov 5 16:04:16.309245 kubelet[2821]: I1105 16:04:16.309193 2821 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 16:04:16.311729 kubelet[2821]: I1105 16:04:16.311696 2821 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 16:04:16.312450 kubelet[2821]: I1105 16:04:16.312396 2821 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 16:04:16.319029 kubelet[2821]: I1105 16:04:16.318993 2821 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 16:04:16.320408 kubelet[2821]: I1105 16:04:16.319439 2821 server.go:1289] "Started kubelet" Nov 5 16:04:16.320408 kubelet[2821]: I1105 16:04:16.319713 2821 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 16:04:16.320408 kubelet[2821]: I1105 16:04:16.319876 2821 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 16:04:16.320408 kubelet[2821]: I1105 16:04:16.320182 2821 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 16:04:16.326143 kubelet[2821]: I1105 16:04:16.326113 2821 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 16:04:16.330582 kubelet[2821]: I1105 16:04:16.330528 2821 server.go:317] "Adding debug handlers to kubelet server" Nov 5 16:04:16.334070 kubelet[2821]: E1105 16:04:16.332884 2821 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 16:04:16.334070 kubelet[2821]: I1105 16:04:16.332900 2821 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 16:04:16.334070 kubelet[2821]: I1105 16:04:16.333144 2821 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 16:04:16.334070 kubelet[2821]: I1105 16:04:16.333369 2821 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 16:04:16.334070 kubelet[2821]: I1105 16:04:16.333502 2821 reconciler.go:26] "Reconciler: start to sync state" Nov 5 16:04:16.335489 kubelet[2821]: I1105 16:04:16.335465 2821 factory.go:223] Registration of the systemd container factory successfully Nov 5 16:04:16.336169 kubelet[2821]: I1105 16:04:16.335835 2821 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 16:04:16.337586 kubelet[2821]: I1105 16:04:16.337566 2821 factory.go:223] Registration of the containerd container factory successfully Nov 5 16:04:16.357931 kubelet[2821]: I1105 16:04:16.357599 2821 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 5 16:04:16.360004 kubelet[2821]: I1105 16:04:16.359981 2821 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 16:04:16.360004 kubelet[2821]: I1105 16:04:16.360004 2821 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 16:04:16.360116 kubelet[2821]: I1105 16:04:16.360027 2821 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 16:04:16.360116 kubelet[2821]: I1105 16:04:16.360035 2821 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 16:04:16.360116 kubelet[2821]: E1105 16:04:16.360093 2821 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 16:04:16.392261 kubelet[2821]: I1105 16:04:16.392222 2821 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 16:04:16.392261 kubelet[2821]: I1105 16:04:16.392245 2821 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 16:04:16.392261 kubelet[2821]: I1105 16:04:16.392279 2821 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:04:16.392518 kubelet[2821]: I1105 16:04:16.392477 2821 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 16:04:16.392518 kubelet[2821]: I1105 16:04:16.392496 2821 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 16:04:16.392518 kubelet[2821]: I1105 16:04:16.392513 2821 policy_none.go:49] "None policy: Start" Nov 5 16:04:16.392672 kubelet[2821]: I1105 16:04:16.392522 2821 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 16:04:16.392672 kubelet[2821]: I1105 16:04:16.392533 2821 state_mem.go:35] "Initializing new in-memory state store" Nov 5 16:04:16.392672 kubelet[2821]: I1105 16:04:16.392620 2821 state_mem.go:75] "Updated machine memory state" Nov 5 16:04:16.397242 kubelet[2821]: E1105 16:04:16.397203 2821 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 16:04:16.397485 kubelet[2821]: I1105 16:04:16.397458 2821 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 16:04:16.397555 kubelet[2821]: I1105 16:04:16.397478 2821 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 16:04:16.398570 kubelet[2821]: I1105 16:04:16.398235 2821 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 16:04:16.400463 kubelet[2821]: E1105 16:04:16.400415 2821 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 16:04:16.461414 kubelet[2821]: I1105 16:04:16.461370 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:16.461606 kubelet[2821]: I1105 16:04:16.461375 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 16:04:16.463374 kubelet[2821]: I1105 16:04:16.462584 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:16.470116 kubelet[2821]: E1105 16:04:16.470072 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 16:04:16.470735 kubelet[2821]: E1105 16:04:16.470698 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:16.508511 kubelet[2821]: I1105 16:04:16.508258 2821 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:04:16.515566 kubelet[2821]: I1105 16:04:16.515519 2821 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 5 16:04:16.515749 kubelet[2821]: I1105 16:04:16.515639 2821 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 16:04:16.534974 kubelet[2821]: I1105 16:04:16.534904 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:16.534974 kubelet[2821]: I1105 16:04:16.534961 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58cc59dcdea031e95055e279b3259bb0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"58cc59dcdea031e95055e279b3259bb0\") " pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:16.535214 kubelet[2821]: I1105 16:04:16.534992 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58cc59dcdea031e95055e279b3259bb0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"58cc59dcdea031e95055e279b3259bb0\") " pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:16.535214 kubelet[2821]: I1105 16:04:16.535017 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58cc59dcdea031e95055e279b3259bb0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"58cc59dcdea031e95055e279b3259bb0\") " pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:16.535214 kubelet[2821]: I1105 16:04:16.535062 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:16.535214 kubelet[2821]: I1105 16:04:16.535101 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:16.535214 kubelet[2821]: I1105 16:04:16.535161 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 5 16:04:16.535400 kubelet[2821]: I1105 16:04:16.535184 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:16.535400 kubelet[2821]: I1105 16:04:16.535204 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:04:16.770447 kubelet[2821]: E1105 16:04:16.770270 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:16.770581 kubelet[2821]: E1105 16:04:16.770457 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:16.771114 kubelet[2821]: E1105 16:04:16.771082 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:17.311771 kubelet[2821]: I1105 16:04:17.311724 2821 apiserver.go:52] "Watching apiserver" Nov 5 16:04:17.333957 kubelet[2821]: I1105 16:04:17.333918 2821 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 16:04:17.374893 kubelet[2821]: I1105 16:04:17.374875 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:17.374992 kubelet[2821]: I1105 16:04:17.374975 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 16:04:17.375210 kubelet[2821]: E1105 16:04:17.375191 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:17.381375 kubelet[2821]: E1105 16:04:17.380699 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 16:04:17.381375 kubelet[2821]: E1105 16:04:17.380834 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:17.381525 kubelet[2821]: E1105 16:04:17.381493 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already 
exists" pod="kube-system/kube-apiserver-localhost" Nov 5 16:04:17.381735 kubelet[2821]: E1105 16:04:17.381716 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:17.398369 kubelet[2821]: I1105 16:04:17.398171 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.398148294 podStartE2EDuration="3.398148294s" podCreationTimestamp="2025-11-05 16:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:04:17.391107242 +0000 UTC m=+1.150260855" watchObservedRunningTime="2025-11-05 16:04:17.398148294 +0000 UTC m=+1.157301917" Nov 5 16:04:17.398369 kubelet[2821]: I1105 16:04:17.398267 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.398264233 podStartE2EDuration="4.398264233s" podCreationTimestamp="2025-11-05 16:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:04:17.398013841 +0000 UTC m=+1.157167464" watchObservedRunningTime="2025-11-05 16:04:17.398264233 +0000 UTC m=+1.157417856" Nov 5 16:04:17.406831 kubelet[2821]: I1105 16:04:17.406695 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.40667662 podStartE2EDuration="1.40667662s" podCreationTimestamp="2025-11-05 16:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:04:17.406641704 +0000 UTC m=+1.165795327" watchObservedRunningTime="2025-11-05 16:04:17.40667662 +0000 UTC m=+1.165830243" Nov 5 16:04:18.376124 kubelet[2821]: E1105 16:04:18.376081 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:18.376636 kubelet[2821]: E1105 16:04:18.376205 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:19.762765 kubelet[2821]: E1105 16:04:19.762724 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:20.379654 kubelet[2821]: E1105 16:04:20.379599 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:21.226471 kubelet[2821]: I1105 16:04:21.226431 2821 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 16:04:21.226978 kubelet[2821]: I1105 16:04:21.226911 2821 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 16:04:21.227012 containerd[1624]: time="2025-11-05T16:04:21.226732831Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 5 16:04:21.924905 systemd[1]: Created slice kubepods-besteffort-podf5f9b9be_ac4e_491b_9728_ddbdd2b38841.slice - libcontainer container kubepods-besteffort-podf5f9b9be_ac4e_491b_9728_ddbdd2b38841.slice. Nov 5 16:04:21.970201 kubelet[2821]: I1105 16:04:21.970149 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5f9b9be-ac4e-491b-9728-ddbdd2b38841-xtables-lock\") pod \"kube-proxy-hjff7\" (UID: \"f5f9b9be-ac4e-491b-9728-ddbdd2b38841\") " pod="kube-system/kube-proxy-hjff7" Nov 5 16:04:21.970201 kubelet[2821]: I1105 16:04:21.970187 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5f9b9be-ac4e-491b-9728-ddbdd2b38841-lib-modules\") pod \"kube-proxy-hjff7\" (UID: \"f5f9b9be-ac4e-491b-9728-ddbdd2b38841\") " pod="kube-system/kube-proxy-hjff7" Nov 5 16:04:21.970201 kubelet[2821]: I1105 16:04:21.970210 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f5f9b9be-ac4e-491b-9728-ddbdd2b38841-kube-proxy\") pod \"kube-proxy-hjff7\" (UID: \"f5f9b9be-ac4e-491b-9728-ddbdd2b38841\") " pod="kube-system/kube-proxy-hjff7" Nov 5 16:04:21.970428 kubelet[2821]: I1105 16:04:21.970227 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h8pr\" (UniqueName: \"kubernetes.io/projected/f5f9b9be-ac4e-491b-9728-ddbdd2b38841-kube-api-access-9h8pr\") pod \"kube-proxy-hjff7\" (UID: \"f5f9b9be-ac4e-491b-9728-ddbdd2b38841\") " pod="kube-system/kube-proxy-hjff7" Nov 5 16:04:22.091150 kubelet[2821]: E1105 16:04:22.091114 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:22.234743 kubelet[2821]: E1105 16:04:22.234684 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:22.236607 containerd[1624]: time="2025-11-05T16:04:22.236559255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjff7,Uid:f5f9b9be-ac4e-491b-9728-ddbdd2b38841,Namespace:kube-system,Attempt:0,}" Nov 5 16:04:22.241569 systemd[1]: Created slice kubepods-besteffort-pod8a3d2c56_64dc_4d8d_b65e_73096be4a927.slice - libcontainer container kubepods-besteffort-pod8a3d2c56_64dc_4d8d_b65e_73096be4a927.slice. 
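The Created slice record above shows the kubelet's systemd cgroup driver at work: a BestEffort pod with UID f5f9b9be-ac4e-491b-9728-ddbdd2b38841 becomes the slice kubepods-besteffort-podf5f9b9be_ac4e_491b_9728_ddbdd2b38841.slice, the UID's dashes escaped to underscores because "-" is systemd's slice-hierarchy separator. A sketch of the naming rule as reconstructed from the log output (not kubelet source):

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the pattern visible in the log:
// kubepods-<qos>-pod<UID with "-" replaced by "_">.slice
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID of the kube-proxy pod whose volumes are attached below.
	fmt.Println(podSliceName("besteffort", "f5f9b9be-ac4e-491b-9728-ddbdd2b38841"))
	// Output: kubepods-besteffort-podf5f9b9be_ac4e_491b_9728_ddbdd2b38841.slice
}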
Nov 5 16:04:22.272283 kubelet[2821]: I1105 16:04:22.272248 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8a3d2c56-64dc-4d8d-b65e-73096be4a927-var-lib-calico\") pod \"tigera-operator-7dcd859c48-v7vmk\" (UID: \"8a3d2c56-64dc-4d8d-b65e-73096be4a927\") " pod="tigera-operator/tigera-operator-7dcd859c48-v7vmk" Nov 5 16:04:22.272553 kubelet[2821]: I1105 16:04:22.272390 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4mm\" (UniqueName: \"kubernetes.io/projected/8a3d2c56-64dc-4d8d-b65e-73096be4a927-kube-api-access-6x4mm\") pod \"tigera-operator-7dcd859c48-v7vmk\" (UID: \"8a3d2c56-64dc-4d8d-b65e-73096be4a927\") " pod="tigera-operator/tigera-operator-7dcd859c48-v7vmk" Nov 5 16:04:22.275231 containerd[1624]: time="2025-11-05T16:04:22.275194238Z" level=info msg="connecting to shim ad05ceeb1686e775bcc7c60158430429b1556ccbaef15eaf265c07e9b64aca7e" address="unix:///run/containerd/s/4ffa636a3b3294f65c82cec7c678ef05221e4ff03ea0501967ed685f919f10e6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:22.347501 systemd[1]: Started cri-containerd-ad05ceeb1686e775bcc7c60158430429b1556ccbaef15eaf265c07e9b64aca7e.scope - libcontainer container ad05ceeb1686e775bcc7c60158430429b1556ccbaef15eaf265c07e9b64aca7e. Nov 5 16:04:22.380689 containerd[1624]: time="2025-11-05T16:04:22.380635603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjff7,Uid:f5f9b9be-ac4e-491b-9728-ddbdd2b38841,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad05ceeb1686e775bcc7c60158430429b1556ccbaef15eaf265c07e9b64aca7e\"" Nov 5 16:04:22.382321 kubelet[2821]: E1105 16:04:22.382294 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:22.384950 kubelet[2821]: E1105 16:04:22.384896 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:22.389736 containerd[1624]: time="2025-11-05T16:04:22.389687672Z" level=info msg="CreateContainer within sandbox \"ad05ceeb1686e775bcc7c60158430429b1556ccbaef15eaf265c07e9b64aca7e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 16:04:22.403083 containerd[1624]: time="2025-11-05T16:04:22.403025688Z" level=info msg="Container 0d6a7f7d492772f40bf7bd5c30de27f308777d912c89b5be622d21f113f080a7: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:04:22.412851 containerd[1624]: time="2025-11-05T16:04:22.412809003Z" level=info msg="CreateContainer within sandbox \"ad05ceeb1686e775bcc7c60158430429b1556ccbaef15eaf265c07e9b64aca7e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d6a7f7d492772f40bf7bd5c30de27f308777d912c89b5be622d21f113f080a7\"" Nov 5 16:04:22.413476 containerd[1624]: time="2025-11-05T16:04:22.413447736Z" level=info msg="StartContainer for \"0d6a7f7d492772f40bf7bd5c30de27f308777d912c89b5be622d21f113f080a7\"" Nov 5 16:04:22.414819 containerd[1624]: time="2025-11-05T16:04:22.414780725Z" level=info msg="connecting to shim 0d6a7f7d492772f40bf7bd5c30de27f308777d912c89b5be622d21f113f080a7" address="unix:///run/containerd/s/4ffa636a3b3294f65c82cec7c678ef05221e4ff03ea0501967ed685f919f10e6" protocol=ttrpc version=3 Nov 5 16:04:22.439564 systemd[1]: Started 
cri-containerd-0d6a7f7d492772f40bf7bd5c30de27f308777d912c89b5be622d21f113f080a7.scope - libcontainer container 0d6a7f7d492772f40bf7bd5c30de27f308777d912c89b5be622d21f113f080a7. Nov 5 16:04:22.484127 containerd[1624]: time="2025-11-05T16:04:22.484074167Z" level=info msg="StartContainer for \"0d6a7f7d492772f40bf7bd5c30de27f308777d912c89b5be622d21f113f080a7\" returns successfully" Nov 5 16:04:22.545821 containerd[1624]: time="2025-11-05T16:04:22.545701509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v7vmk,Uid:8a3d2c56-64dc-4d8d-b65e-73096be4a927,Namespace:tigera-operator,Attempt:0,}" Nov 5 16:04:22.565072 containerd[1624]: time="2025-11-05T16:04:22.565011707Z" level=info msg="connecting to shim 0e4f59408b4d8303675ea6f7a29f05d1a9c978bb66b40a887fa517616ac90c47" address="unix:///run/containerd/s/d896e17bbebd46dcde06f4061e0c34d881a5a74c61fd58bb375aa99b8854ab6d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:22.595548 systemd[1]: Started cri-containerd-0e4f59408b4d8303675ea6f7a29f05d1a9c978bb66b40a887fa517616ac90c47.scope - libcontainer container 0e4f59408b4d8303675ea6f7a29f05d1a9c978bb66b40a887fa517616ac90c47. Nov 5 16:04:22.647722 containerd[1624]: time="2025-11-05T16:04:22.647675827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v7vmk,Uid:8a3d2c56-64dc-4d8d-b65e-73096be4a927,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0e4f59408b4d8303675ea6f7a29f05d1a9c978bb66b40a887fa517616ac90c47\"" Nov 5 16:04:22.649023 containerd[1624]: time="2025-11-05T16:04:22.648995771Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 16:04:23.096297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290414283.mount: Deactivated successfully. Nov 5 16:04:23.388581 kubelet[2821]: E1105 16:04:23.388466 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:23.390976 kubelet[2821]: E1105 16:04:23.390940 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:23.505230 kubelet[2821]: I1105 16:04:23.505168 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hjff7" podStartSLOduration=2.50515175 podStartE2EDuration="2.50515175s" podCreationTimestamp="2025-11-05 16:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:04:23.504793035 +0000 UTC m=+7.263946658" watchObservedRunningTime="2025-11-05 16:04:23.50515175 +0000 UTC m=+7.264305373" Nov 5 16:04:24.392082 kubelet[2821]: E1105 16:04:24.392031 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:24.752776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343119583.mount: Deactivated successfully. 
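The pod_startup_latency_tracker record above reports podStartSLOduration=2.50515175s for kube-proxy, which is essentially observedRunningTime minus podCreationTimestamp (no image was pulled, so both pull timestamps are the zero time 0001-01-01). A quick check of that arithmetic in Go; the tracker's own figure differs by a fraction of a millisecond because it samples its own clock:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func main() {
	// Timestamps from the kube-proxy startup-latency record above.
	created, _ := time.Parse(layout, "2025-11-05 16:04:21 +0000 UTC")
	running, _ := time.Parse(layout, "2025-11-05 16:04:23.504793035 +0000 UTC")
	fmt.Println(running.Sub(created)) // ≈2.504793035s vs the logged 2.50515175s
}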
Nov 5 16:04:25.072663 containerd[1624]: time="2025-11-05T16:04:25.072523888Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:04:25.073710 containerd[1624]: time="2025-11-05T16:04:25.073670605Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 16:04:25.074757 containerd[1624]: time="2025-11-05T16:04:25.074714058Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:04:25.076876 containerd[1624]: time="2025-11-05T16:04:25.076825309Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:04:25.077549 containerd[1624]: time="2025-11-05T16:04:25.077512772Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.428481304s" Nov 5 16:04:25.077595 containerd[1624]: time="2025-11-05T16:04:25.077550824Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 16:04:25.082816 containerd[1624]: time="2025-11-05T16:04:25.082776584Z" level=info msg="CreateContainer within sandbox \"0e4f59408b4d8303675ea6f7a29f05d1a9c978bb66b40a887fa517616ac90c47\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 16:04:25.091246 containerd[1624]: time="2025-11-05T16:04:25.091193436Z" level=info msg="Container 2c40a59ddc9242657f6e5d3e0e10749d327ff83e644846615f986cdea66b64af: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:04:25.095431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576876960.mount: Deactivated successfully. Nov 5 16:04:25.097816 containerd[1624]: time="2025-11-05T16:04:25.097782872Z" level=info msg="CreateContainer within sandbox \"0e4f59408b4d8303675ea6f7a29f05d1a9c978bb66b40a887fa517616ac90c47\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2c40a59ddc9242657f6e5d3e0e10749d327ff83e644846615f986cdea66b64af\"" Nov 5 16:04:25.098326 containerd[1624]: time="2025-11-05T16:04:25.098282572Z" level=info msg="StartContainer for \"2c40a59ddc9242657f6e5d3e0e10749d327ff83e644846615f986cdea66b64af\"" Nov 5 16:04:25.099122 containerd[1624]: time="2025-11-05T16:04:25.099089180Z" level=info msg="connecting to shim 2c40a59ddc9242657f6e5d3e0e10749d327ff83e644846615f986cdea66b64af" address="unix:///run/containerd/s/d896e17bbebd46dcde06f4061e0c34d881a5a74c61fd58bb375aa99b8854ab6d" protocol=ttrpc version=3 Nov 5 16:04:25.128485 systemd[1]: Started cri-containerd-2c40a59ddc9242657f6e5d3e0e10749d327ff83e644846615f986cdea66b64af.scope - libcontainer container 2c40a59ddc9242657f6e5d3e0e10749d327ff83e644846615f986cdea66b64af. 
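The pull records above carry enough data to estimate throughput for the tigera/operator image: 25061691 bytes read in 2.428481304s. A one-line check, with the numbers copied from the records (illustrative only):

package main

import "fmt"

func main() {
	const bytesRead = 25061691  // "active requests=0, bytes read=25061691"
	const seconds = 2.428481304 // "... in 2.428481304s"
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ≈9.8 MiB/s
}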
Nov 5 16:04:25.159269 containerd[1624]: time="2025-11-05T16:04:25.159208630Z" level=info msg="StartContainer for \"2c40a59ddc9242657f6e5d3e0e10749d327ff83e644846615f986cdea66b64af\" returns successfully" Nov 5 16:04:26.339712 kubelet[2821]: E1105 16:04:26.339668 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:26.356384 kubelet[2821]: I1105 16:04:26.354624 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-v7vmk" podStartSLOduration=1.924913584 podStartE2EDuration="4.354606738s" podCreationTimestamp="2025-11-05 16:04:22 +0000 UTC" firstStartedPulling="2025-11-05 16:04:22.648695496 +0000 UTC m=+6.407849119" lastFinishedPulling="2025-11-05 16:04:25.07838865 +0000 UTC m=+8.837542273" observedRunningTime="2025-11-05 16:04:25.816828875 +0000 UTC m=+9.575982498" watchObservedRunningTime="2025-11-05 16:04:26.354606738 +0000 UTC m=+10.113760361" Nov 5 16:04:26.399052 kubelet[2821]: E1105 16:04:26.399017 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:31.040705 sudo[1832]: pam_unix(sudo:session): session closed for user root Nov 5 16:04:31.057859 sshd[1831]: Connection closed by 10.0.0.1 port 50694 Nov 5 16:04:31.059299 sshd-session[1828]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:31.066243 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:50694.service: Deactivated successfully. Nov 5 16:04:31.066610 systemd-logind[1592]: Session 7 logged out. Waiting for processes to exit. Nov 5 16:04:31.069711 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 16:04:31.070015 systemd[1]: session-7.scope: Consumed 7.207s CPU time, 215.7M memory peak. Nov 5 16:04:31.074785 systemd-logind[1592]: Removed session 7. Nov 5 16:04:35.225811 systemd[1]: Created slice kubepods-besteffort-podda774599_337b_4aaf_a57a_18dc1bde9b17.slice - libcontainer container kubepods-besteffort-podda774599_337b_4aaf_a57a_18dc1bde9b17.slice. 
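The tigera-operator startup-latency record above illustrates how the kubelet's pod-startup figure is computed when an image pull is involved: podStartSLOduration (1.924913584s) is the end-to-end duration (4.354606738s) minus the image-pull window (lastFinishedPulling − firstStartedPulling). Checking that against the logged timestamps:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func main() {
	// Figures from the tigera-operator startup-latency record above.
	e2e := 4354606738 * time.Nanosecond // podStartE2EDuration="4.354606738s"
	startPull, _ := time.Parse(layout, "2025-11-05 16:04:22.648695496 +0000 UTC")
	donePull, _ := time.Parse(layout, "2025-11-05 16:04:25.07838865 +0000 UTC")
	fmt.Println(e2e - donePull.Sub(startPull)) // 1.924913584s, matching the log
}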
Nov 5 16:04:35.261144 kubelet[2821]: I1105 16:04:35.261076 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/da774599-337b-4aaf-a57a-18dc1bde9b17-typha-certs\") pod \"calico-typha-6c5599c9fb-hlzl6\" (UID: \"da774599-337b-4aaf-a57a-18dc1bde9b17\") " pod="calico-system/calico-typha-6c5599c9fb-hlzl6" Nov 5 16:04:35.261144 kubelet[2821]: I1105 16:04:35.261122 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5s9p\" (UniqueName: \"kubernetes.io/projected/da774599-337b-4aaf-a57a-18dc1bde9b17-kube-api-access-s5s9p\") pod \"calico-typha-6c5599c9fb-hlzl6\" (UID: \"da774599-337b-4aaf-a57a-18dc1bde9b17\") " pod="calico-system/calico-typha-6c5599c9fb-hlzl6" Nov 5 16:04:35.261144 kubelet[2821]: I1105 16:04:35.261144 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da774599-337b-4aaf-a57a-18dc1bde9b17-tigera-ca-bundle\") pod \"calico-typha-6c5599c9fb-hlzl6\" (UID: \"da774599-337b-4aaf-a57a-18dc1bde9b17\") " pod="calico-system/calico-typha-6c5599c9fb-hlzl6" Nov 5 16:04:35.307100 systemd[1]: Created slice kubepods-besteffort-pod5f624a00_ba16_4097_a41a_4b4ecaa10c9d.slice - libcontainer container kubepods-besteffort-pod5f624a00_ba16_4097_a41a_4b4ecaa10c9d.slice. Nov 5 16:04:35.362299 kubelet[2821]: I1105 16:04:35.362188 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-lib-modules\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363607 kubelet[2821]: I1105 16:04:35.363053 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-cni-bin-dir\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363607 kubelet[2821]: I1105 16:04:35.363077 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-policysync\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363607 kubelet[2821]: I1105 16:04:35.363123 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-cni-log-dir\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363607 kubelet[2821]: I1105 16:04:35.363139 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-node-certs\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363607 kubelet[2821]: I1105 16:04:35.363155 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-xtables-lock\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363877 kubelet[2821]: I1105 16:04:35.363171 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-tigera-ca-bundle\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363877 kubelet[2821]: I1105 16:04:35.363204 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-flexvol-driver-host\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363877 kubelet[2821]: I1105 16:04:35.363225 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-var-lib-calico\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363877 kubelet[2821]: I1105 16:04:35.363241 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-var-run-calico\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.363877 kubelet[2821]: I1105 16:04:35.363259 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-cni-net-dir\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.364047 kubelet[2821]: I1105 16:04:35.363277 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclnw\" (UniqueName: \"kubernetes.io/projected/5f624a00-ba16-4097-a41a-4b4ecaa10c9d-kube-api-access-cclnw\") pod \"calico-node-rz4hz\" (UID: \"5f624a00-ba16-4097-a41a-4b4ecaa10c9d\") " pod="calico-system/calico-node-rz4hz" Nov 5 16:04:35.467803 kubelet[2821]: E1105 16:04:35.467769 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.467996 kubelet[2821]: W1105 16:04:35.467962 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.467996 kubelet[2821]: E1105 16:04:35.467996 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:04:35.472374 kubelet[2821]: E1105 16:04:35.471011 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.472374 kubelet[2821]: W1105 16:04:35.471038 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.472374 kubelet[2821]: E1105 16:04:35.471060 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.477288 kubelet[2821]: E1105 16:04:35.477190 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.477288 kubelet[2821]: W1105 16:04:35.477204 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.477288 kubelet[2821]: E1105 16:04:35.477219 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.498857 kubelet[2821]: E1105 16:04:35.498808 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358" Nov 5 16:04:35.530888 kubelet[2821]: E1105 16:04:35.530848 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:35.531501 containerd[1624]: time="2025-11-05T16:04:35.531455415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c5599c9fb-hlzl6,Uid:da774599-337b-4aaf-a57a-18dc1bde9b17,Namespace:calico-system,Attempt:0,}" Nov 5 16:04:35.547446 kubelet[2821]: E1105 16:04:35.547416 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.547710 kubelet[2821]: W1105 16:04:35.547575 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.547710 kubelet[2821]: E1105 16:04:35.547604 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:04:35.548068 kubelet[2821]: E1105 16:04:35.548039 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.548110 kubelet[2821]: W1105 16:04:35.548067 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.548110 kubelet[2821]: E1105 16:04:35.548096 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.548417 kubelet[2821]: E1105 16:04:35.548399 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.548417 kubelet[2821]: W1105 16:04:35.548411 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.548417 kubelet[2821]: E1105 16:04:35.548419 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.548719 kubelet[2821]: E1105 16:04:35.548700 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.548719 kubelet[2821]: W1105 16:04:35.548717 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.548769 kubelet[2821]: E1105 16:04:35.548733 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.548952 kubelet[2821]: E1105 16:04:35.548940 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.548986 kubelet[2821]: W1105 16:04:35.548953 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.548986 kubelet[2821]: E1105 16:04:35.548961 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.549152 kubelet[2821]: E1105 16:04:35.549138 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.549152 kubelet[2821]: W1105 16:04:35.549148 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.549311 kubelet[2821]: E1105 16:04:35.549158 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:04:35.549338 kubelet[2821]: E1105 16:04:35.549317 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.549338 kubelet[2821]: W1105 16:04:35.549324 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.549338 kubelet[2821]: E1105 16:04:35.549332 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.549582 kubelet[2821]: E1105 16:04:35.549567 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.549582 kubelet[2821]: W1105 16:04:35.549578 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.549689 kubelet[2821]: E1105 16:04:35.549586 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.549791 kubelet[2821]: E1105 16:04:35.549780 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.549791 kubelet[2821]: W1105 16:04:35.549790 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.549791 kubelet[2821]: E1105 16:04:35.549798 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.549974 kubelet[2821]: E1105 16:04:35.549964 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.549974 kubelet[2821]: W1105 16:04:35.549973 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.550024 kubelet[2821]: E1105 16:04:35.549981 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.550148 kubelet[2821]: E1105 16:04:35.550137 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.550148 kubelet[2821]: W1105 16:04:35.550147 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.550230 kubelet[2821]: E1105 16:04:35.550155 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:04:35.550325 kubelet[2821]: E1105 16:04:35.550313 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.550325 kubelet[2821]: W1105 16:04:35.550323 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.550417 kubelet[2821]: E1105 16:04:35.550331 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.550787 kubelet[2821]: E1105 16:04:35.550764 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.550787 kubelet[2821]: W1105 16:04:35.550776 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.550787 kubelet[2821]: E1105 16:04:35.550786 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.551003 kubelet[2821]: E1105 16:04:35.550989 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.551003 kubelet[2821]: W1105 16:04:35.551000 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.551064 kubelet[2821]: E1105 16:04:35.551009 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.551218 kubelet[2821]: E1105 16:04:35.551204 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.551218 kubelet[2821]: W1105 16:04:35.551216 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.551281 kubelet[2821]: E1105 16:04:35.551226 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.551735 kubelet[2821]: E1105 16:04:35.551717 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.551735 kubelet[2821]: W1105 16:04:35.551730 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.551839 kubelet[2821]: E1105 16:04:35.551740 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:04:35.551964 kubelet[2821]: E1105 16:04:35.551949 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.551964 kubelet[2821]: W1105 16:04:35.551960 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.552020 kubelet[2821]: E1105 16:04:35.551969 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.552562 kubelet[2821]: E1105 16:04:35.552545 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.552562 kubelet[2821]: W1105 16:04:35.552561 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.552635 kubelet[2821]: E1105 16:04:35.552571 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.552776 kubelet[2821]: E1105 16:04:35.552754 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.552776 kubelet[2821]: W1105 16:04:35.552766 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.552776 kubelet[2821]: E1105 16:04:35.552775 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.552976 kubelet[2821]: E1105 16:04:35.552962 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.552976 kubelet[2821]: W1105 16:04:35.552973 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.553026 kubelet[2821]: E1105 16:04:35.552982 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:04:35.553904 containerd[1624]: time="2025-11-05T16:04:35.553866472Z" level=info msg="connecting to shim 9717ad04dd14357424b6ff5dd128c05da639be89cae10c12a4d73f67edfeb4f0" address="unix:///run/containerd/s/a10c7b688cbb806792b203563a51b5f3d5ac19ad07c734bf63e514998d248b37" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:35.565550 kubelet[2821]: E1105 16:04:35.565531 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.565550 kubelet[2821]: W1105 16:04:35.565545 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.565650 kubelet[2821]: E1105 16:04:35.565556 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.565650 kubelet[2821]: I1105 16:04:35.565584 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/998850e6-5a3e-41d3-948e-1a886bae0358-socket-dir\") pod \"csi-node-driver-qm7k4\" (UID: \"998850e6-5a3e-41d3-948e-1a886bae0358\") " pod="calico-system/csi-node-driver-qm7k4" Nov 5 16:04:35.565858 kubelet[2821]: E1105 16:04:35.565842 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.565858 kubelet[2821]: W1105 16:04:35.565854 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.565905 kubelet[2821]: E1105 16:04:35.565863 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.566034 kubelet[2821]: I1105 16:04:35.566011 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/998850e6-5a3e-41d3-948e-1a886bae0358-kubelet-dir\") pod \"csi-node-driver-qm7k4\" (UID: \"998850e6-5a3e-41d3-948e-1a886bae0358\") " pod="calico-system/csi-node-driver-qm7k4" Nov 5 16:04:35.566249 kubelet[2821]: E1105 16:04:35.566232 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.566249 kubelet[2821]: W1105 16:04:35.566245 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.566337 kubelet[2821]: E1105 16:04:35.566254 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:04:35.566337 kubelet[2821]: I1105 16:04:35.566272 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/998850e6-5a3e-41d3-948e-1a886bae0358-registration-dir\") pod \"csi-node-driver-qm7k4\" (UID: \"998850e6-5a3e-41d3-948e-1a886bae0358\") " pod="calico-system/csi-node-driver-qm7k4" Nov 5 16:04:35.566511 kubelet[2821]: E1105 16:04:35.566495 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.566511 kubelet[2821]: W1105 16:04:35.566507 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.566576 kubelet[2821]: E1105 16:04:35.566518 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.566576 kubelet[2821]: I1105 16:04:35.566539 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/998850e6-5a3e-41d3-948e-1a886bae0358-varrun\") pod \"csi-node-driver-qm7k4\" (UID: \"998850e6-5a3e-41d3-948e-1a886bae0358\") " pod="calico-system/csi-node-driver-qm7k4" Nov 5 16:04:35.567153 kubelet[2821]: E1105 16:04:35.566742 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.567196 kubelet[2821]: W1105 16:04:35.567160 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.567196 kubelet[2821]: E1105 16:04:35.567174 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.567241 kubelet[2821]: I1105 16:04:35.567194 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp2vh\" (UniqueName: \"kubernetes.io/projected/998850e6-5a3e-41d3-948e-1a886bae0358-kube-api-access-jp2vh\") pod \"csi-node-driver-qm7k4\" (UID: \"998850e6-5a3e-41d3-948e-1a886bae0358\") " pod="calico-system/csi-node-driver-qm7k4" Nov 5 16:04:35.569406 kubelet[2821]: E1105 16:04:35.568260 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.569406 kubelet[2821]: W1105 16:04:35.568292 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.569406 kubelet[2821]: E1105 16:04:35.568323 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:04:35.569406 kubelet[2821]: E1105 16:04:35.568739 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.569406 kubelet[2821]: W1105 16:04:35.568819 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.569406 kubelet[2821]: E1105 16:04:35.568868 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.569406 kubelet[2821]: E1105 16:04:35.569271 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.569406 kubelet[2821]: W1105 16:04:35.569285 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.569406 kubelet[2821]: E1105 16:04:35.569296 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.569853 kubelet[2821]: E1105 16:04:35.569810 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.569853 kubelet[2821]: W1105 16:04:35.569832 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.569907 kubelet[2821]: E1105 16:04:35.569857 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.570430 kubelet[2821]: E1105 16:04:35.570410 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.570430 kubelet[2821]: W1105 16:04:35.570426 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.570430 kubelet[2821]: E1105 16:04:35.570438 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.570894 kubelet[2821]: E1105 16:04:35.570877 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.570894 kubelet[2821]: W1105 16:04:35.570892 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.570993 kubelet[2821]: E1105 16:04:35.570973 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:04:35.571634 kubelet[2821]: E1105 16:04:35.571594 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.571634 kubelet[2821]: W1105 16:04:35.571621 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.571719 kubelet[2821]: E1105 16:04:35.571637 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.572014 kubelet[2821]: E1105 16:04:35.571994 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.572014 kubelet[2821]: W1105 16:04:35.572011 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.572084 kubelet[2821]: E1105 16:04:35.572025 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.572331 kubelet[2821]: E1105 16:04:35.572309 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.572375 kubelet[2821]: W1105 16:04:35.572328 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.572408 kubelet[2821]: E1105 16:04:35.572398 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.572889 kubelet[2821]: E1105 16:04:35.572870 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:04:35.572889 kubelet[2821]: W1105 16:04:35.572885 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:04:35.572955 kubelet[2821]: E1105 16:04:35.572898 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:04:35.584561 systemd[1]: Started cri-containerd-9717ad04dd14357424b6ff5dd128c05da639be89cae10c12a4d73f67edfeb4f0.scope - libcontainer container 9717ad04dd14357424b6ff5dd128c05da639be89cae10c12a4d73f67edfeb4f0. 
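The wall of driver-call errors here is one failure logged three ways: the kubelet's FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the binary does not exist, so the call yields empty output, and unmarshalling "" as the driver's expected JSON status reply fails with "unexpected end of JSON input". Both halves are easy to reproduce with the standard library (a simplified sketch, using a bare command name so the lookup fails with the same "$PATH" message; a working FlexVolume driver would print JSON such as {"status":"Success","capabilities":{"attach":false}} on stdout):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus models the JSON a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// Probe the (missing) driver the same way the kubelet does: <driver> init
	out, err := exec.Command("uds", "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err) // executable file not found in $PATH
	}
	// Empty output then reproduces the paired unmarshal error.
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("failed to unmarshal output for command init:", err) // unexpected end of JSON input
	}
}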
Nov 5 16:04:35.612630 kubelet[2821]: E1105 16:04:35.612563 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:35.613582 containerd[1624]: time="2025-11-05T16:04:35.613538989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rz4hz,Uid:5f624a00-ba16-4097-a41a-4b4ecaa10c9d,Namespace:calico-system,Attempt:0,}"
Nov 5 16:04:35.636312 containerd[1624]: time="2025-11-05T16:04:35.635543321Z" level=info msg="connecting to shim 7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32" address="unix:///run/containerd/s/cc0220e2c32e4a0855232be3d973b59396fe8161d3997d9ba1cf6edc1d0514f7" namespace=k8s.io protocol=ttrpc version=3
Nov 5 16:04:35.645975 containerd[1624]: time="2025-11-05T16:04:35.645908971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c5599c9fb-hlzl6,Uid:da774599-337b-4aaf-a57a-18dc1bde9b17,Namespace:calico-system,Attempt:0,} returns sandbox id \"9717ad04dd14357424b6ff5dd128c05da639be89cae10c12a4d73f67edfeb4f0\""
Nov 5 16:04:35.646868 kubelet[2821]: E1105 16:04:35.646835 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:35.648839 containerd[1624]: time="2025-11-05T16:04:35.648809119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 5 16:04:35.667530 systemd[1]: Started cri-containerd-7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32.scope - libcontainer container 7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32.
Nov 5 16:04:35.668966 kubelet[2821]: E1105 16:04:35.668935 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:04:35.668966 kubelet[2821]: W1105 16:04:35.668959 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:04:35.669082 kubelet[2821]: E1105 16:04:35.668979 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:703 triplet repeats, only timestamps changing, roughly 25 more times between Nov 5 16:04:35.669 and Nov 5 16:04:35.705 ...]
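The triplet above is kubelet's FlexVolume plugin probe failing: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, kubelet executes the driver binary with the argument init and expects a JSON status object on stdout. Here the nodeagent~uds/uds binary does not exist, so the call yields empty output, and unmarshalling "" fails with "unexpected end of JSON input". As a minimal sketch of the call convention only (this is not Calico's actual nodeagent~uds driver, and the capabilities value is an illustrative assumption), a conforming driver's init handler could look like:

#!/usr/bin/env python3
# Sketch of a FlexVolume driver entry point, assuming the standard
# "<driver> init" call convention seen in the log above. Illustrative only.
import json
import sys

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # Any valid JSON status object satisfies kubelet's unmarshal step;
        # "attach": False is an assumed capability choice for this sketch.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Other verbs (mount, unmount, ...) are out of scope for this sketch.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

Because kubelet re-probes the plugin directory on every scan, the triplet keeps recurring until a driver binary appears there (Calico's pod2daemon-flexvol init container, pulled below, is what installs it) or the stale directory is removed.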
Nov 5 16:04:35.764265 containerd[1624]: time="2025-11-05T16:04:35.764152656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rz4hz,Uid:5f624a00-ba16-4097-a41a-4b4ecaa10c9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32\""
Nov 5 16:04:35.765424 kubelet[2821]: E1105 16:04:35.764639 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:37.361291 kubelet[2821]: E1105 16:04:37.361219 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358"
Nov 5 16:04:37.804577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691987803.mount: Deactivated successfully.
Nov 5 16:04:38.669767 containerd[1624]: time="2025-11-05T16:04:38.669692424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:04:38.670680 containerd[1624]: time="2025-11-05T16:04:38.670656906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 5 16:04:38.672562 containerd[1624]: time="2025-11-05T16:04:38.672487803Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:04:38.676797 containerd[1624]: time="2025-11-05T16:04:38.676736602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:04:38.677325 containerd[1624]: time="2025-11-05T16:04:38.677280754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.02844187s"
Nov 5 16:04:38.677325 containerd[1624]: time="2025-11-05T16:04:38.677318204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 5 16:04:38.678307 containerd[1624]: time="2025-11-05T16:04:38.678277476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 5 16:04:38.690361 containerd[1624]: time="2025-11-05T16:04:38.690305583Z" level=info msg="CreateContainer within sandbox \"9717ad04dd14357424b6ff5dd128c05da639be89cae10c12a4d73f67edfeb4f0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 5 16:04:38.700384 containerd[1624]: time="2025-11-05T16:04:38.698043603Z" level=info msg="Container d141ff771d8e9dc9aedf4b21670c03c9aa94660864988e577cb0e32a5f8c57f6: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:04:38.706336 containerd[1624]: time="2025-11-05T16:04:38.706288656Z" level=info msg="CreateContainer within sandbox \"9717ad04dd14357424b6ff5dd128c05da639be89cae10c12a4d73f67edfeb4f0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d141ff771d8e9dc9aedf4b21670c03c9aa94660864988e577cb0e32a5f8c57f6\""
Nov 5 16:04:38.706939 containerd[1624]: time="2025-11-05T16:04:38.706906366Z" level=info msg="StartContainer for \"d141ff771d8e9dc9aedf4b21670c03c9aa94660864988e577cb0e32a5f8c57f6\""
Nov 5 16:04:38.708248 containerd[1624]: time="2025-11-05T16:04:38.708178014Z" level=info msg="connecting to shim d141ff771d8e9dc9aedf4b21670c03c9aa94660864988e577cb0e32a5f8c57f6" address="unix:///run/containerd/s/a10c7b688cbb806792b203563a51b5f3d5ac19ad07c734bf63e514998d248b37" protocol=ttrpc version=3
Nov 5 16:04:38.730492 systemd[1]: Started cri-containerd-d141ff771d8e9dc9aedf4b21670c03c9aa94660864988e577cb0e32a5f8c57f6.scope - libcontainer container d141ff771d8e9dc9aedf4b21670c03c9aa94660864988e577cb0e32a5f8c57f6.
Nov 5 16:04:38.787883 containerd[1624]: time="2025-11-05T16:04:38.787844204Z" level=info msg="StartContainer for \"d141ff771d8e9dc9aedf4b21670c03c9aa94660864988e577cb0e32a5f8c57f6\" returns successfully"
Nov 5 16:04:39.361244 kubelet[2821]: E1105 16:04:39.361179 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358"
Nov 5 16:04:39.426767 kubelet[2821]: E1105 16:04:39.426720 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:39.436796 kubelet[2821]: I1105 16:04:39.436597 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c5599c9fb-hlzl6" podStartSLOduration=1.406971014 podStartE2EDuration="4.4365773s" podCreationTimestamp="2025-11-05 16:04:35 +0000 UTC" firstStartedPulling="2025-11-05 16:04:35.6485223 +0000 UTC m=+19.407675923" lastFinishedPulling="2025-11-05 16:04:38.678128586 +0000 UTC m=+22.437282209" observedRunningTime="2025-11-05 16:04:39.436062554 +0000 UTC m=+23.195216187" watchObservedRunningTime="2025-11-05 16:04:39.4365773 +0000 UTC m=+23.195730923"
Nov 5 16:04:39.477521 kubelet[2821]: E1105 16:04:39.477482 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:04:39.477521 kubelet[2821]: W1105 16:04:39.477516 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:04:39.481631 kubelet[2821]: E1105 16:04:39.481594 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same FlexVolume probe triplet repeats, only timestamps changing, roughly 30 more times between Nov 5 16:04:39.481 and Nov 5 16:04:39.506 ...]
Nov 5 16:04:40.374644 containerd[1624]: time="2025-11-05T16:04:40.374580928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:04:40.375452 containerd[1624]: time="2025-11-05T16:04:40.375416517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 5 16:04:40.377178 containerd[1624]: time="2025-11-05T16:04:40.377144411Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:04:40.379616 containerd[1624]: time="2025-11-05T16:04:40.379565567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:04:40.380026 containerd[1624]: time="2025-11-05T16:04:40.379992999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.701688282s"
Nov 5 16:04:40.380026 containerd[1624]: time="2025-11-05T16:04:40.380021633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 5 16:04:40.383400 containerd[1624]: time="2025-11-05T16:04:40.383367335Z" level=info msg="CreateContainer within sandbox \"7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 5 16:04:40.392926 containerd[1624]: time="2025-11-05T16:04:40.392571056Z" level=info msg="Container 5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:04:40.400108 containerd[1624]: time="2025-11-05T16:04:40.400056450Z" level=info msg="CreateContainer within sandbox \"7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2\""
Nov 5 16:04:40.400869 containerd[1624]: time="2025-11-05T16:04:40.400825624Z" level=info msg="StartContainer for \"5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2\""
Nov 5 16:04:40.402414 containerd[1624]: time="2025-11-05T16:04:40.402386103Z" level=info msg="connecting to shim 5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2" address="unix:///run/containerd/s/cc0220e2c32e4a0855232be3d973b59396fe8161d3997d9ba1cf6edc1d0514f7" protocol=ttrpc version=3
Nov 5 16:04:40.429180 kubelet[2821]: I1105 16:04:40.429145 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 5 16:04:40.429696 kubelet[2821]: E1105 16:04:40.429479 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:40.429541 systemd[1]: Started cri-containerd-5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2.scope - libcontainer container 5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2.
Nov 5 16:04:40.476958 containerd[1624]: time="2025-11-05T16:04:40.476913301Z" level=info msg="StartContainer for \"5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2\" returns successfully"
Nov 5 16:04:40.486702 systemd[1]: cri-containerd-5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2.scope: Deactivated successfully.
Nov 5 16:04:40.489452 containerd[1624]: time="2025-11-05T16:04:40.489403352Z" level=info msg="received exit event container_id:\"5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2\" id:\"5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2\" pid:3519 exited_at:{seconds:1762358680 nanos:489015623}"
Nov 5 16:04:40.489532 containerd[1624]: time="2025-11-05T16:04:40.489468955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2\" id:\"5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2\" pid:3519 exited_at:{seconds:1762358680 nanos:489015623}"
Nov 5 16:04:40.513695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fee0ef74e01757a916007d368f34be40ca77b2fb65987c88bc7c9e9770638c2-rootfs.mount: Deactivated successfully.
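The recurring dns.go:153 entries are kubelet truncating the node's resolv.conf when it builds pod DNS config: the limit kubelet enforces is three nameservers (matching glibc's MAXNS), so extra entries are dropped and the applied line shown in the log is the surviving first three. A rough sketch of that truncation; the dropped entries never appear in this log, so the fourth nameserver below is a placeholder, and the constant and function names are assumptions rather than kubelet's actual code:

# Sketch of the truncation behind the dns.go:153 "Nameserver limits exceeded"
# warning. The limit of 3 matches glibc's MAXNS; names here are illustrative.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    servers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]  # extras are omitted, with a warning

example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(applied_nameservers(example))  # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']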
Nov 5 16:04:41.360367 kubelet[2821]: E1105 16:04:41.360291 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358"
Nov 5 16:04:41.433994 kubelet[2821]: E1105 16:04:41.433952 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:41.434707 containerd[1624]: time="2025-11-05T16:04:41.434651189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 5 16:04:43.114854 kubelet[2821]: I1105 16:04:43.114806 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 5 16:04:43.117043 kubelet[2821]: E1105 16:04:43.117015 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:43.361269 kubelet[2821]: E1105 16:04:43.361203 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358"
Nov 5 16:04:43.437253 kubelet[2821]: E1105 16:04:43.437151 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:44.412413 containerd[1624]: time="2025-11-05T16:04:44.412322885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:04:44.413136 containerd[1624]: time="2025-11-05T16:04:44.413094022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 5 16:04:44.414257 containerd[1624]: time="2025-11-05T16:04:44.414207031Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:04:44.417051 containerd[1624]: time="2025-11-05T16:04:44.416316742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:04:44.417051 containerd[1624]: time="2025-11-05T16:04:44.416939721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.982245371s"
Nov 5 16:04:44.417051 containerd[1624]: time="2025-11-05T16:04:44.416963666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 5 16:04:44.421504 containerd[1624]: time="2025-11-05T16:04:44.421457951Z" level=info msg="CreateContainer within sandbox \"7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 5 16:04:44.431043 containerd[1624]: time="2025-11-05T16:04:44.430994993Z" level=info msg="Container 7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:04:44.444364 containerd[1624]: time="2025-11-05T16:04:44.444268999Z" level=info msg="CreateContainer within sandbox \"7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25\""
Nov 5 16:04:44.444981 containerd[1624]: time="2025-11-05T16:04:44.444916214Z" level=info msg="StartContainer for \"7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25\""
Nov 5 16:04:44.446863 containerd[1624]: time="2025-11-05T16:04:44.446827641Z" level=info msg="connecting to shim 7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25" address="unix:///run/containerd/s/cc0220e2c32e4a0855232be3d973b59396fe8161d3997d9ba1cf6edc1d0514f7" protocol=ttrpc version=3
Nov 5 16:04:44.478512 systemd[1]: Started cri-containerd-7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25.scope - libcontainer container 7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25.
Nov 5 16:04:44.529024 containerd[1624]: time="2025-11-05T16:04:44.528967200Z" level=info msg="StartContainer for \"7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25\" returns successfully"
Nov 5 16:04:45.360795 kubelet[2821]: E1105 16:04:45.360730 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358"
Nov 5 16:04:45.444029 kubelet[2821]: E1105 16:04:45.443988 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:45.513183 systemd[1]: cri-containerd-7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25.scope: Deactivated successfully.
Nov 5 16:04:45.513567 systemd[1]: cri-containerd-7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25.scope: Consumed 637ms CPU time, 176.9M memory peak, 3.8M read from disk, 171.3M written to disk.
Nov 5 16:04:45.514233 containerd[1624]: time="2025-11-05T16:04:45.513224186Z" level=info msg="received exit event container_id:\"7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25\" id:\"7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25\" pid:3579 exited_at:{seconds:1762358685 nanos:512922610}"
Nov 5 16:04:45.514233 containerd[1624]: time="2025-11-05T16:04:45.513393594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25\" id:\"7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25\" pid:3579 exited_at:{seconds:1762358685 nanos:512922610}"
Nov 5 16:04:45.538238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7456bdf199f95653df953ab5ac8dc2d3b7fc04d4c4ff1855995d7c892bf22d25-rootfs.mount: Deactivated successfully.
Nov 5 16:04:45.597743 kubelet[2821]: I1105 16:04:45.597698 2821 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 5 16:04:45.936572 systemd[1]: Created slice kubepods-besteffort-podf5389104_99ae_4ef4_ba0e_916e3b8ce467.slice - libcontainer container kubepods-besteffort-podf5389104_99ae_4ef4_ba0e_916e3b8ce467.slice.
Nov 5 16:04:45.943162 kubelet[2821]: I1105 16:04:45.943080 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5389104-99ae-4ef4-ba0e-916e3b8ce467-tigera-ca-bundle\") pod \"calico-kube-controllers-546f546666-6794m\" (UID: \"f5389104-99ae-4ef4-ba0e-916e3b8ce467\") " pod="calico-system/calico-kube-controllers-546f546666-6794m"
Nov 5 16:04:45.943292 kubelet[2821]: I1105 16:04:45.943180 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqz4r\" (UniqueName: \"kubernetes.io/projected/f5389104-99ae-4ef4-ba0e-916e3b8ce467-kube-api-access-hqz4r\") pod \"calico-kube-controllers-546f546666-6794m\" (UID: \"f5389104-99ae-4ef4-ba0e-916e3b8ce467\") " pod="calico-system/calico-kube-controllers-546f546666-6794m"
Nov 5 16:04:45.947973 systemd[1]: Created slice kubepods-burstable-podb04f7cec_01c3_4233_b501_2b57e869475f.slice - libcontainer container kubepods-burstable-podb04f7cec_01c3_4233_b501_2b57e869475f.slice.
Nov 5 16:04:45.956235 systemd[1]: Created slice kubepods-burstable-pod231e5ad8_3fa0_49fe_9747_a9fe616049e3.slice - libcontainer container kubepods-burstable-pod231e5ad8_3fa0_49fe_9747_a9fe616049e3.slice.
Nov 5 16:04:45.962416 systemd[1]: Created slice kubepods-besteffort-podb05ca89b_5f9b_44f1_a3ba_63e56589f0e4.slice - libcontainer container kubepods-besteffort-podb05ca89b_5f9b_44f1_a3ba_63e56589f0e4.slice.
Nov 5 16:04:45.969808 systemd[1]: Created slice kubepods-besteffort-podc385d739_2fbd_49ea_95d6_32a0c449fade.slice - libcontainer container kubepods-besteffort-podc385d739_2fbd_49ea_95d6_32a0c449fade.slice.
Nov 5 16:04:45.975322 systemd[1]: Created slice kubepods-besteffort-pod95105778_77b0_4ad6_94f0_b022607ec4da.slice - libcontainer container kubepods-besteffort-pod95105778_77b0_4ad6_94f0_b022607ec4da.slice.
Nov 5 16:04:45.980918 systemd[1]: Created slice kubepods-besteffort-pod3b0524f3_6d33_4a2f_8ac8_972312ac8fcc.slice - libcontainer container kubepods-besteffort-pod3b0524f3_6d33_4a2f_8ac8_972312ac8fcc.slice.
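Once the node reports Ready, the pending pods are admitted and kubelet's systemd cgroup driver creates one slice per pod, named by QoS class and pod UID with the UID's dashes mapped to underscores, as the Created slice entries above show. A sketch of that name mangling; the format is inferred directly from these log lines, not taken from kubelet source:

# Reconstructs the pod slice names seen above from QoS class and pod UID.
def pod_slice_name(qos: str, pod_uid: str) -> str:
    # systemd unit-name escaping turns the dashes inside the UID into underscores.
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("besteffort", "f5389104-99ae-4ef4-ba0e-916e3b8ce467"))
# -> kubepods-besteffort-podf5389104_99ae_4ef4_ba0e_916e3b8ce467.slice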
Nov 5 16:04:46.044414 kubelet[2821]: I1105 16:04:46.044302 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk5mj\" (UniqueName: \"kubernetes.io/projected/231e5ad8-3fa0-49fe-9747-a9fe616049e3-kube-api-access-zk5mj\") pod \"coredns-674b8bbfcf-htsp7\" (UID: \"231e5ad8-3fa0-49fe-9747-a9fe616049e3\") " pod="kube-system/coredns-674b8bbfcf-htsp7"
Nov 5 16:04:46.044414 kubelet[2821]: I1105 16:04:46.044390 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5w2k\" (UniqueName: \"kubernetes.io/projected/c385d739-2fbd-49ea-95d6-32a0c449fade-kube-api-access-w5w2k\") pod \"whisker-75f9989487-589w8\" (UID: \"c385d739-2fbd-49ea-95d6-32a0c449fade\") " pod="calico-system/whisker-75f9989487-589w8"
Nov 5 16:04:46.044414 kubelet[2821]: I1105 16:04:46.044411 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b05ca89b-5f9b-44f1-a3ba-63e56589f0e4-calico-apiserver-certs\") pod \"calico-apiserver-55c4bf75cc-g4fsv\" (UID: \"b05ca89b-5f9b-44f1-a3ba-63e56589f0e4\") " pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv"
Nov 5 16:04:46.044616 kubelet[2821]: I1105 16:04:46.044427 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/231e5ad8-3fa0-49fe-9747-a9fe616049e3-config-volume\") pod \"coredns-674b8bbfcf-htsp7\" (UID: \"231e5ad8-3fa0-49fe-9747-a9fe616049e3\") " pod="kube-system/coredns-674b8bbfcf-htsp7"
Nov 5 16:04:46.044616 kubelet[2821]: I1105 16:04:46.044441 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95105778-77b0-4ad6-94f0-b022607ec4da-goldmane-ca-bundle\") pod \"goldmane-666569f655-ghhp8\" (UID: \"95105778-77b0-4ad6-94f0-b022607ec4da\") " pod="calico-system/goldmane-666569f655-ghhp8"
Nov 5 16:04:46.044616 kubelet[2821]: I1105 16:04:46.044465 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxdr\" (UniqueName: \"kubernetes.io/projected/b05ca89b-5f9b-44f1-a3ba-63e56589f0e4-kube-api-access-tgxdr\") pod \"calico-apiserver-55c4bf75cc-g4fsv\" (UID: \"b05ca89b-5f9b-44f1-a3ba-63e56589f0e4\") " pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv"
Nov 5 16:04:46.044616 kubelet[2821]: I1105 16:04:46.044478 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/95105778-77b0-4ad6-94f0-b022607ec4da-goldmane-key-pair\") pod \"goldmane-666569f655-ghhp8\" (UID: \"95105778-77b0-4ad6-94f0-b022607ec4da\") " pod="calico-system/goldmane-666569f655-ghhp8"
Nov 5 16:04:46.044616 kubelet[2821]: I1105 16:04:46.044491 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c385d739-2fbd-49ea-95d6-32a0c449fade-whisker-ca-bundle\") pod \"whisker-75f9989487-589w8\" (UID: \"c385d739-2fbd-49ea-95d6-32a0c449fade\") " pod="calico-system/whisker-75f9989487-589w8"
Nov 5 16:04:46.044728 kubelet[2821]: I1105 16:04:46.044506 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b04f7cec-01c3-4233-b501-2b57e869475f-config-volume\") pod \"coredns-674b8bbfcf-jzmkb\" (UID: \"b04f7cec-01c3-4233-b501-2b57e869475f\") " pod="kube-system/coredns-674b8bbfcf-jzmkb"
Nov 5 16:04:46.044728 kubelet[2821]: I1105 16:04:46.044519 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b0524f3-6d33-4a2f-8ac8-972312ac8fcc-calico-apiserver-certs\") pod \"calico-apiserver-55c4bf75cc-smcwt\" (UID: \"3b0524f3-6d33-4a2f-8ac8-972312ac8fcc\") " pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt"
Nov 5 16:04:46.044855 kubelet[2821]: I1105 16:04:46.044546 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbbmr\" (UniqueName: \"kubernetes.io/projected/95105778-77b0-4ad6-94f0-b022607ec4da-kube-api-access-zbbmr\") pod \"goldmane-666569f655-ghhp8\" (UID: \"95105778-77b0-4ad6-94f0-b022607ec4da\") " pod="calico-system/goldmane-666569f655-ghhp8"
Nov 5 16:04:46.044912 kubelet[2821]: I1105 16:04:46.044867 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c385d739-2fbd-49ea-95d6-32a0c449fade-whisker-backend-key-pair\") pod \"whisker-75f9989487-589w8\" (UID: \"c385d739-2fbd-49ea-95d6-32a0c449fade\") " pod="calico-system/whisker-75f9989487-589w8"
Nov 5 16:04:46.044912 kubelet[2821]: I1105 16:04:46.044882 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88wr\" (UniqueName: \"kubernetes.io/projected/b04f7cec-01c3-4233-b501-2b57e869475f-kube-api-access-k88wr\") pod \"coredns-674b8bbfcf-jzmkb\" (UID: \"b04f7cec-01c3-4233-b501-2b57e869475f\") " pod="kube-system/coredns-674b8bbfcf-jzmkb"
Nov 5 16:04:46.044912 kubelet[2821]: I1105 16:04:46.044901 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95105778-77b0-4ad6-94f0-b022607ec4da-config\") pod \"goldmane-666569f655-ghhp8\" (UID: \"95105778-77b0-4ad6-94f0-b022607ec4da\") " pod="calico-system/goldmane-666569f655-ghhp8"
Nov 5 16:04:46.044912 kubelet[2821]: I1105 16:04:46.044918 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n89g\" (UniqueName: \"kubernetes.io/projected/3b0524f3-6d33-4a2f-8ac8-972312ac8fcc-kube-api-access-8n89g\") pod \"calico-apiserver-55c4bf75cc-smcwt\" (UID: \"3b0524f3-6d33-4a2f-8ac8-972312ac8fcc\") " pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt"
Nov 5 16:04:46.243891 containerd[1624]: time="2025-11-05T16:04:46.243830473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546f546666-6794m,Uid:f5389104-99ae-4ef4-ba0e-916e3b8ce467,Namespace:calico-system,Attempt:0,}"
Nov 5 16:04:46.252480 kubelet[2821]: E1105 16:04:46.252446 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:46.252975 containerd[1624]: time="2025-11-05T16:04:46.252951741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jzmkb,Uid:b04f7cec-01c3-4233-b501-2b57e869475f,Namespace:kube-system,Attempt:0,}"
Nov 5 16:04:46.262668 kubelet[2821]: E1105 16:04:46.262630 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:46.265297 containerd[1624]: time="2025-11-05T16:04:46.264268571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htsp7,Uid:231e5ad8-3fa0-49fe-9747-a9fe616049e3,Namespace:kube-system,Attempt:0,}"
Nov 5 16:04:46.266034 containerd[1624]: time="2025-11-05T16:04:46.265898461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c4bf75cc-g4fsv,Uid:b05ca89b-5f9b-44f1-a3ba-63e56589f0e4,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 16:04:46.274703 containerd[1624]: time="2025-11-05T16:04:46.274645387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75f9989487-589w8,Uid:c385d739-2fbd-49ea-95d6-32a0c449fade,Namespace:calico-system,Attempt:0,}"
Nov 5 16:04:46.278934 containerd[1624]: time="2025-11-05T16:04:46.278902898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ghhp8,Uid:95105778-77b0-4ad6-94f0-b022607ec4da,Namespace:calico-system,Attempt:0,}"
Nov 5 16:04:46.294260 containerd[1624]: time="2025-11-05T16:04:46.291558841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c4bf75cc-smcwt,Uid:3b0524f3-6d33-4a2f-8ac8-972312ac8fcc,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 16:04:46.481143 kubelet[2821]: E1105 16:04:46.481088 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:04:46.491943 containerd[1624]: time="2025-11-05T16:04:46.491849092Z" level=error msg="Failed to destroy network for sandbox \"a92b437a90d99e8af68b06b3ad8362f90d2c3a03920e0fc37f3398ddc9987a76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:04:46.493546 containerd[1624]: time="2025-11-05T16:04:46.493519979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 5 16:04:46.499562 containerd[1624]: time="2025-11-05T16:04:46.498944881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htsp7,Uid:231e5ad8-3fa0-49fe-9747-a9fe616049e3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92b437a90d99e8af68b06b3ad8362f90d2c3a03920e0fc37f3398ddc9987a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:04:46.499714 kubelet[2821]: E1105 16:04:46.499681 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92b437a90d99e8af68b06b3ad8362f90d2c3a03920e0fc37f3398ddc9987a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:04:46.499764 kubelet[2821]: E1105 16:04:46.499747 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92b437a90d99e8af68b06b3ad8362f90d2c3a03920e0fc37f3398ddc9987a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running
and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-htsp7" Nov 5 16:04:46.499810 kubelet[2821]: E1105 16:04:46.499773 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92b437a90d99e8af68b06b3ad8362f90d2c3a03920e0fc37f3398ddc9987a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-htsp7" Nov 5 16:04:46.499898 kubelet[2821]: E1105 16:04:46.499838 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-htsp7_kube-system(231e5ad8-3fa0-49fe-9747-a9fe616049e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-htsp7_kube-system(231e5ad8-3fa0-49fe-9747-a9fe616049e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a92b437a90d99e8af68b06b3ad8362f90d2c3a03920e0fc37f3398ddc9987a76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-htsp7" podUID="231e5ad8-3fa0-49fe-9747-a9fe616049e3" Nov 5 16:04:46.503723 containerd[1624]: time="2025-11-05T16:04:46.503679086Z" level=error msg="Failed to destroy network for sandbox \"ce7649ad78d53ad0ac14e13cf3ed04062198795091f89aa7182c3934ad2dd556\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.506400 containerd[1624]: time="2025-11-05T16:04:46.505916284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546f546666-6794m,Uid:f5389104-99ae-4ef4-ba0e-916e3b8ce467,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7649ad78d53ad0ac14e13cf3ed04062198795091f89aa7182c3934ad2dd556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.506569 kubelet[2821]: E1105 16:04:46.506525 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7649ad78d53ad0ac14e13cf3ed04062198795091f89aa7182c3934ad2dd556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.506609 kubelet[2821]: E1105 16:04:46.506592 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7649ad78d53ad0ac14e13cf3ed04062198795091f89aa7182c3934ad2dd556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-546f546666-6794m" Nov 5 16:04:46.506641 kubelet[2821]: E1105 16:04:46.506614 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ce7649ad78d53ad0ac14e13cf3ed04062198795091f89aa7182c3934ad2dd556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-546f546666-6794m" Nov 5 16:04:46.506750 kubelet[2821]: E1105 16:04:46.506670 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-546f546666-6794m_calico-system(f5389104-99ae-4ef4-ba0e-916e3b8ce467)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-546f546666-6794m_calico-system(f5389104-99ae-4ef4-ba0e-916e3b8ce467)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce7649ad78d53ad0ac14e13cf3ed04062198795091f89aa7182c3934ad2dd556\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-546f546666-6794m" podUID="f5389104-99ae-4ef4-ba0e-916e3b8ce467" Nov 5 16:04:46.510280 containerd[1624]: time="2025-11-05T16:04:46.510213881Z" level=error msg="Failed to destroy network for sandbox \"72bca0d310c7a5e56a2e4bd300170aa8805f868d13a77a5ce994c42bd2816007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.517274 containerd[1624]: time="2025-11-05T16:04:46.517216984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ghhp8,Uid:95105778-77b0-4ad6-94f0-b022607ec4da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bca0d310c7a5e56a2e4bd300170aa8805f868d13a77a5ce994c42bd2816007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.517721 kubelet[2821]: E1105 16:04:46.517460 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bca0d310c7a5e56a2e4bd300170aa8805f868d13a77a5ce994c42bd2816007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.517721 kubelet[2821]: E1105 16:04:46.517517 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bca0d310c7a5e56a2e4bd300170aa8805f868d13a77a5ce994c42bd2816007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ghhp8" Nov 5 16:04:46.517721 kubelet[2821]: E1105 16:04:46.517540 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bca0d310c7a5e56a2e4bd300170aa8805f868d13a77a5ce994c42bd2816007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ghhp8" 
Nov 5 16:04:46.517868 kubelet[2821]: E1105 16:04:46.517583 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ghhp8_calico-system(95105778-77b0-4ad6-94f0-b022607ec4da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ghhp8_calico-system(95105778-77b0-4ad6-94f0-b022607ec4da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72bca0d310c7a5e56a2e4bd300170aa8805f868d13a77a5ce994c42bd2816007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ghhp8" podUID="95105778-77b0-4ad6-94f0-b022607ec4da" Nov 5 16:04:46.519008 containerd[1624]: time="2025-11-05T16:04:46.518948404Z" level=error msg="Failed to destroy network for sandbox \"c6f395019b0038ecbcb118fbcf18dcbea0f784e2328b0e5c866e080b99154b0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.520445 containerd[1624]: time="2025-11-05T16:04:46.520286456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75f9989487-589w8,Uid:c385d739-2fbd-49ea-95d6-32a0c449fade,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f395019b0038ecbcb118fbcf18dcbea0f784e2328b0e5c866e080b99154b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.520819 kubelet[2821]: E1105 16:04:46.520778 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f395019b0038ecbcb118fbcf18dcbea0f784e2328b0e5c866e080b99154b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.520871 kubelet[2821]: E1105 16:04:46.520833 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f395019b0038ecbcb118fbcf18dcbea0f784e2328b0e5c866e080b99154b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75f9989487-589w8" Nov 5 16:04:46.520871 kubelet[2821]: E1105 16:04:46.520854 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f395019b0038ecbcb118fbcf18dcbea0f784e2328b0e5c866e080b99154b0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75f9989487-589w8" Nov 5 16:04:46.520916 kubelet[2821]: E1105 16:04:46.520898 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75f9989487-589w8_calico-system(c385d739-2fbd-49ea-95d6-32a0c449fade)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-75f9989487-589w8_calico-system(c385d739-2fbd-49ea-95d6-32a0c449fade)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6f395019b0038ecbcb118fbcf18dcbea0f784e2328b0e5c866e080b99154b0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75f9989487-589w8" podUID="c385d739-2fbd-49ea-95d6-32a0c449fade" Nov 5 16:04:46.524764 containerd[1624]: time="2025-11-05T16:04:46.524647450Z" level=error msg="Failed to destroy network for sandbox \"933e3809f4f666015e7ed35501497888781e66add6421bd1a5fb8a0b8898da09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.526501 containerd[1624]: time="2025-11-05T16:04:46.526463479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c4bf75cc-g4fsv,Uid:b05ca89b-5f9b-44f1-a3ba-63e56589f0e4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"933e3809f4f666015e7ed35501497888781e66add6421bd1a5fb8a0b8898da09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.526687 kubelet[2821]: E1105 16:04:46.526649 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"933e3809f4f666015e7ed35501497888781e66add6421bd1a5fb8a0b8898da09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.526771 kubelet[2821]: E1105 16:04:46.526744 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"933e3809f4f666015e7ed35501497888781e66add6421bd1a5fb8a0b8898da09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv" Nov 5 16:04:46.526803 kubelet[2821]: E1105 16:04:46.526775 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"933e3809f4f666015e7ed35501497888781e66add6421bd1a5fb8a0b8898da09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv" Nov 5 16:04:46.526857 kubelet[2821]: E1105 16:04:46.526826 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55c4bf75cc-g4fsv_calico-apiserver(b05ca89b-5f9b-44f1-a3ba-63e56589f0e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55c4bf75cc-g4fsv_calico-apiserver(b05ca89b-5f9b-44f1-a3ba-63e56589f0e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"933e3809f4f666015e7ed35501497888781e66add6421bd1a5fb8a0b8898da09\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv" podUID="b05ca89b-5f9b-44f1-a3ba-63e56589f0e4" Nov 5 16:04:46.528413 containerd[1624]: time="2025-11-05T16:04:46.528314342Z" level=error msg="Failed to destroy network for sandbox \"7bd319d5abfaa1eca4e19caaf413ce640df694d68c159e102eecbac15d3bf106\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.530461 containerd[1624]: time="2025-11-05T16:04:46.530424914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jzmkb,Uid:b04f7cec-01c3-4233-b501-2b57e869475f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd319d5abfaa1eca4e19caaf413ce640df694d68c159e102eecbac15d3bf106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.530601 kubelet[2821]: E1105 16:04:46.530568 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd319d5abfaa1eca4e19caaf413ce640df694d68c159e102eecbac15d3bf106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.530654 kubelet[2821]: E1105 16:04:46.530609 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd319d5abfaa1eca4e19caaf413ce640df694d68c159e102eecbac15d3bf106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jzmkb" Nov 5 16:04:46.530654 kubelet[2821]: E1105 16:04:46.530627 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd319d5abfaa1eca4e19caaf413ce640df694d68c159e102eecbac15d3bf106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jzmkb" Nov 5 16:04:46.530715 kubelet[2821]: E1105 16:04:46.530659 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jzmkb_kube-system(b04f7cec-01c3-4233-b501-2b57e869475f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jzmkb_kube-system(b04f7cec-01c3-4233-b501-2b57e869475f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bd319d5abfaa1eca4e19caaf413ce640df694d68c159e102eecbac15d3bf106\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jzmkb" podUID="b04f7cec-01c3-4233-b501-2b57e869475f" Nov 5 16:04:46.540676 containerd[1624]: time="2025-11-05T16:04:46.540627192Z" level=error msg="Failed to destroy network for 
sandbox \"c33fc0da15603accb404e6b7aebe1f362a03596461756916fec73d438499faf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.542003 containerd[1624]: time="2025-11-05T16:04:46.541890493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c4bf75cc-smcwt,Uid:3b0524f3-6d33-4a2f-8ac8-972312ac8fcc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33fc0da15603accb404e6b7aebe1f362a03596461756916fec73d438499faf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.542238 kubelet[2821]: E1105 16:04:46.542200 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33fc0da15603accb404e6b7aebe1f362a03596461756916fec73d438499faf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:46.542297 kubelet[2821]: E1105 16:04:46.542258 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33fc0da15603accb404e6b7aebe1f362a03596461756916fec73d438499faf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt" Nov 5 16:04:46.542297 kubelet[2821]: E1105 16:04:46.542279 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33fc0da15603accb404e6b7aebe1f362a03596461756916fec73d438499faf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt" Nov 5 16:04:46.542384 kubelet[2821]: E1105 16:04:46.542356 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55c4bf75cc-smcwt_calico-apiserver(3b0524f3-6d33-4a2f-8ac8-972312ac8fcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55c4bf75cc-smcwt_calico-apiserver(3b0524f3-6d33-4a2f-8ac8-972312ac8fcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c33fc0da15603accb404e6b7aebe1f362a03596461756916fec73d438499faf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt" podUID="3b0524f3-6d33-4a2f-8ac8-972312ac8fcc" Nov 5 16:04:46.542540 systemd[1]: run-netns-cni\x2d9d54d516\x2d3e93\x2de903\x2df638\x2d36852e05adba.mount: Deactivated successfully. Nov 5 16:04:47.367310 systemd[1]: Created slice kubepods-besteffort-pod998850e6_5a3e_41d3_948e_1a886bae0358.slice - libcontainer container kubepods-besteffort-pod998850e6_5a3e_41d3_948e_1a886bae0358.slice. 
Nov 5 16:04:47.369881 containerd[1624]: time="2025-11-05T16:04:47.369841357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qm7k4,Uid:998850e6-5a3e-41d3-948e-1a886bae0358,Namespace:calico-system,Attempt:0,}" Nov 5 16:04:47.422886 containerd[1624]: time="2025-11-05T16:04:47.422802471Z" level=error msg="Failed to destroy network for sandbox \"fefaa214bffe79787404ef355f3666801175780a2b18735d7afd98949caf35b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:47.424267 containerd[1624]: time="2025-11-05T16:04:47.424158546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qm7k4,Uid:998850e6-5a3e-41d3-948e-1a886bae0358,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fefaa214bffe79787404ef355f3666801175780a2b18735d7afd98949caf35b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:47.424911 kubelet[2821]: E1105 16:04:47.424509 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fefaa214bffe79787404ef355f3666801175780a2b18735d7afd98949caf35b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:04:47.424911 kubelet[2821]: E1105 16:04:47.424581 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fefaa214bffe79787404ef355f3666801175780a2b18735d7afd98949caf35b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qm7k4" Nov 5 16:04:47.424911 kubelet[2821]: E1105 16:04:47.424603 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fefaa214bffe79787404ef355f3666801175780a2b18735d7afd98949caf35b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qm7k4" Nov 5 16:04:47.425014 kubelet[2821]: E1105 16:04:47.424660 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qm7k4_calico-system(998850e6-5a3e-41d3-948e-1a886bae0358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qm7k4_calico-system(998850e6-5a3e-41d3-948e-1a886bae0358)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fefaa214bffe79787404ef355f3666801175780a2b18735d7afd98949caf35b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358" Nov 5 16:04:47.425699 systemd[1]: run-netns-cni\x2d6beb5c39\x2d77fa\x2d075b\x2db81a\x2df3e52c1ee799.mount: Deactivated successfully. 
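With csi-node-driver-qm7k4 above, all eight pending pods have now hit the identical CreatePodSandbox error. A throwaway triage sketch for a saved journal excerpt, grouping those failures per pod (the regex assumes the kubelet structured-log format shown above):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches lines like:
	//   kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="..." pod="kube-system/coredns-..."
	re := regexp.MustCompile(`Failed to create sandbox for pod".*pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%4d %s\n", n, pod)
	}
}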
Nov 5 16:04:53.738670 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:35544.service - OpenSSH per-connection server daemon (10.0.0.1:35544). Nov 5 16:04:53.819578 sshd[3892]: Accepted publickey for core from 10.0.0.1 port 35544 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:04:53.821361 sshd-session[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:53.827540 systemd-logind[1592]: New session 8 of user core. Nov 5 16:04:53.833512 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 16:04:53.954307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396530969.mount: Deactivated successfully. Nov 5 16:04:53.984370 sshd[3895]: Connection closed by 10.0.0.1 port 35544 Nov 5 16:04:53.985572 sshd-session[3892]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:53.989131 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:35544.service: Deactivated successfully. Nov 5 16:04:53.991589 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 16:04:53.993252 systemd-logind[1592]: Session 8 logged out. Waiting for processes to exit. Nov 5 16:04:53.994830 systemd-logind[1592]: Removed session 8. Nov 5 16:04:56.884002 containerd[1624]: time="2025-11-05T16:04:56.883941137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:04:56.884875 containerd[1624]: time="2025-11-05T16:04:56.884817806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 16:04:56.886028 containerd[1624]: time="2025-11-05T16:04:56.886001260Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:04:56.888108 containerd[1624]: time="2025-11-05T16:04:56.888042707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:04:56.888648 containerd[1624]: time="2025-11-05T16:04:56.888619835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.395069881s" Nov 5 16:04:56.888698 containerd[1624]: time="2025-11-05T16:04:56.888651897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 16:04:56.906568 containerd[1624]: time="2025-11-05T16:04:56.906519911Z" level=info msg="CreateContainer within sandbox \"7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 16:04:56.917507 containerd[1624]: time="2025-11-05T16:04:56.917459956Z" level=info msg="Container 28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:04:56.928341 containerd[1624]: time="2025-11-05T16:04:56.928285961Z" level=info msg="CreateContainer within sandbox \"7e8d05141313d06103bdc3dbe93c77db3de9cefe9f543ca696629061c2aefb32\" for &ContainerMetadata{Name:calico-node,Attempt:0,} 
returns container id \"28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697\"" Nov 5 16:04:56.928839 containerd[1624]: time="2025-11-05T16:04:56.928811139Z" level=info msg="StartContainer for \"28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697\"" Nov 5 16:04:56.930273 containerd[1624]: time="2025-11-05T16:04:56.930246251Z" level=info msg="connecting to shim 28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697" address="unix:///run/containerd/s/cc0220e2c32e4a0855232be3d973b59396fe8161d3997d9ba1cf6edc1d0514f7" protocol=ttrpc version=3 Nov 5 16:04:56.963505 systemd[1]: Started cri-containerd-28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697.scope - libcontainer container 28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697. Nov 5 16:04:57.007287 containerd[1624]: time="2025-11-05T16:04:57.007201532Z" level=info msg="StartContainer for \"28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697\" returns successfully" Nov 5 16:04:57.087480 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 16:04:57.087596 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 5 16:04:57.315542 kubelet[2821]: I1105 16:04:57.315481 2821 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c385d739-2fbd-49ea-95d6-32a0c449fade-whisker-backend-key-pair\") pod \"c385d739-2fbd-49ea-95d6-32a0c449fade\" (UID: \"c385d739-2fbd-49ea-95d6-32a0c449fade\") " Nov 5 16:04:57.315542 kubelet[2821]: I1105 16:04:57.315558 2821 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5w2k\" (UniqueName: \"kubernetes.io/projected/c385d739-2fbd-49ea-95d6-32a0c449fade-kube-api-access-w5w2k\") pod \"c385d739-2fbd-49ea-95d6-32a0c449fade\" (UID: \"c385d739-2fbd-49ea-95d6-32a0c449fade\") " Nov 5 16:04:57.316072 kubelet[2821]: I1105 16:04:57.315576 2821 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c385d739-2fbd-49ea-95d6-32a0c449fade-whisker-ca-bundle\") pod \"c385d739-2fbd-49ea-95d6-32a0c449fade\" (UID: \"c385d739-2fbd-49ea-95d6-32a0c449fade\") " Nov 5 16:04:57.316909 kubelet[2821]: I1105 16:04:57.316845 2821 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c385d739-2fbd-49ea-95d6-32a0c449fade-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c385d739-2fbd-49ea-95d6-32a0c449fade" (UID: "c385d739-2fbd-49ea-95d6-32a0c449fade"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 16:04:57.322614 kubelet[2821]: I1105 16:04:57.322575 2821 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c385d739-2fbd-49ea-95d6-32a0c449fade-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c385d739-2fbd-49ea-95d6-32a0c449fade" (UID: "c385d739-2fbd-49ea-95d6-32a0c449fade"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 16:04:57.323283 kubelet[2821]: I1105 16:04:57.323235 2821 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c385d739-2fbd-49ea-95d6-32a0c449fade-kube-api-access-w5w2k" (OuterVolumeSpecName: "kube-api-access-w5w2k") pod "c385d739-2fbd-49ea-95d6-32a0c449fade" (UID: "c385d739-2fbd-49ea-95d6-32a0c449fade"). InnerVolumeSpecName "kube-api-access-w5w2k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 16:04:57.361168 containerd[1624]: time="2025-11-05T16:04:57.361133447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c4bf75cc-g4fsv,Uid:b05ca89b-5f9b-44f1-a3ba-63e56589f0e4,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:04:57.416284 kubelet[2821]: I1105 16:04:57.416210 2821 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c385d739-2fbd-49ea-95d6-32a0c449fade-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 16:04:57.416284 kubelet[2821]: I1105 16:04:57.416246 2821 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w5w2k\" (UniqueName: \"kubernetes.io/projected/c385d739-2fbd-49ea-95d6-32a0c449fade-kube-api-access-w5w2k\") on node \"localhost\" DevicePath \"\"" Nov 5 16:04:57.416284 kubelet[2821]: I1105 16:04:57.416257 2821 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c385d739-2fbd-49ea-95d6-32a0c449fade-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 16:04:57.505199 kubelet[2821]: E1105 16:04:57.505154 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:57.506817 systemd-networkd[1509]: cali0ffcdd84305: Link UP Nov 5 16:04:57.508992 systemd-networkd[1509]: cali0ffcdd84305: Gained carrier Nov 5 16:04:57.514784 systemd[1]: Removed slice kubepods-besteffort-podc385d739_2fbd_49ea_95d6_32a0c449fade.slice - libcontainer container kubepods-besteffort-podc385d739_2fbd_49ea_95d6_32a0c449fade.slice. 
Nov 5 16:04:57.532176 kubelet[2821]: I1105 16:04:57.532098 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rz4hz" podStartSLOduration=1.409050935 podStartE2EDuration="22.532076802s" podCreationTimestamp="2025-11-05 16:04:35 +0000 UTC" firstStartedPulling="2025-11-05 16:04:35.766332631 +0000 UTC m=+19.525486254" lastFinishedPulling="2025-11-05 16:04:56.889358498 +0000 UTC m=+40.648512121" observedRunningTime="2025-11-05 16:04:57.528449317 +0000 UTC m=+41.287602940" watchObservedRunningTime="2025-11-05 16:04:57.532076802 +0000 UTC m=+41.291230426" Nov 5 16:04:57.539551 containerd[1624]: 2025-11-05 16:04:57.384 [INFO][3973] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:04:57.539551 containerd[1624]: 2025-11-05 16:04:57.402 [INFO][3973] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0 calico-apiserver-55c4bf75cc- calico-apiserver b05ca89b-5f9b-44f1-a3ba-63e56589f0e4 898 0 2025-11-05 16:04:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55c4bf75cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55c4bf75cc-g4fsv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ffcdd84305 [] [] }} ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-g4fsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-" Nov 5 16:04:57.539551 containerd[1624]: 2025-11-05 16:04:57.402 [INFO][3973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-g4fsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" Nov 5 16:04:57.539551 containerd[1624]: 2025-11-05 16:04:57.463 [INFO][3988] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" HandleID="k8s-pod-network.b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Workload="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.464 [INFO][3988] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" HandleID="k8s-pod-network.b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Workload="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55c4bf75cc-g4fsv", "timestamp":"2025-11-05 16:04:57.463962975 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.464 [INFO][3988] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
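The startup-latency line above is internally consistent: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration is what remains after subtracting image-pull time. The decomposition below reproduces the logged numbers exactly; reading it as kubelet's definition is an inference from those numbers, not from kubelet's source:

package main

import (
	"fmt"
	"time"
)

// ts parses the timestamp format used in the tracker line above.
func ts(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-11-05 16:04:35 +0000 UTC")
	firstPull := ts("2025-11-05 16:04:35.766332631 +0000 UTC")
	lastPull := ts("2025-11-05 16:04:56.889358498 +0000 UTC")
	running := ts("2025-11-05 16:04:57.528449317 +0000 UTC")

	e2e := running.Sub(created)          // 22.532076802s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 1.409050935s, as logged
	fmt.Println(e2e, slo)
}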
Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.464 [INFO][3988] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.465 [INFO][3988] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.471 [INFO][3988] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" host="localhost" Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.477 [INFO][3988] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.481 [INFO][3988] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.483 [INFO][3988] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.484 [INFO][3988] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:04:57.539778 containerd[1624]: 2025-11-05 16:04:57.484 [INFO][3988] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" host="localhost" Nov 5 16:04:57.540014 containerd[1624]: 2025-11-05 16:04:57.485 [INFO][3988] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee Nov 5 16:04:57.540014 containerd[1624]: 2025-11-05 16:04:57.489 [INFO][3988] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" host="localhost" Nov 5 16:04:57.540014 containerd[1624]: 2025-11-05 16:04:57.494 [INFO][3988] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" host="localhost" Nov 5 16:04:57.540014 containerd[1624]: 2025-11-05 16:04:57.494 [INFO][3988] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" host="localhost" Nov 5 16:04:57.540014 containerd[1624]: 2025-11-05 16:04:57.494 [INFO][3988] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
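The IPAM walk above claims 192.168.88.129 out of the block 192.168.88.128/26 for which this host holds the affinity. A quick check of that block arithmetic:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	ip := netip.MustParseAddr("192.168.88.129")
	fmt.Println(block.Contains(ip))       // true: the claimed IP is inside the affine block
	fmt.Println(1 << (32 - block.Bits())) // 64: addresses per /26 block
}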
Nov 5 16:04:57.540014 containerd[1624]: 2025-11-05 16:04:57.494 [INFO][3988] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" HandleID="k8s-pod-network.b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Workload="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" Nov 5 16:04:57.540340 containerd[1624]: 2025-11-05 16:04:57.498 [INFO][3973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-g4fsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0", GenerateName:"calico-apiserver-55c4bf75cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b05ca89b-5f9b-44f1-a3ba-63e56589f0e4", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c4bf75cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55c4bf75cc-g4fsv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ffcdd84305", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:57.540420 containerd[1624]: 2025-11-05 16:04:57.498 [INFO][3973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-g4fsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" Nov 5 16:04:57.540420 containerd[1624]: 2025-11-05 16:04:57.498 [INFO][3973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ffcdd84305 ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-g4fsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" Nov 5 16:04:57.540420 containerd[1624]: 2025-11-05 16:04:57.506 [INFO][3973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-g4fsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" Nov 5 16:04:57.540491 containerd[1624]: 2025-11-05 16:04:57.509 [INFO][3973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-g4fsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0", GenerateName:"calico-apiserver-55c4bf75cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b05ca89b-5f9b-44f1-a3ba-63e56589f0e4", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c4bf75cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee", Pod:"calico-apiserver-55c4bf75cc-g4fsv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ffcdd84305", MAC:"f6:d6:d4:f1:5c:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:57.540538 containerd[1624]: 2025-11-05 16:04:57.525 [INFO][3973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-g4fsv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--g4fsv-eth0" Nov 5 16:04:57.681514 systemd[1]: Created slice kubepods-besteffort-pod02617f58_0688_49a8_be3f_9c86c801f751.slice - libcontainer container kubepods-besteffort-pod02617f58_0688_49a8_be3f_9c86c801f751.slice. 
Nov 5 16:04:57.718772 kubelet[2821]: I1105 16:04:57.718703 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/02617f58-0688-49a8-be3f-9c86c801f751-whisker-backend-key-pair\") pod \"whisker-5864c6d54c-kz7l7\" (UID: \"02617f58-0688-49a8-be3f-9c86c801f751\") " pod="calico-system/whisker-5864c6d54c-kz7l7" Nov 5 16:04:57.718772 kubelet[2821]: I1105 16:04:57.718751 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02617f58-0688-49a8-be3f-9c86c801f751-whisker-ca-bundle\") pod \"whisker-5864c6d54c-kz7l7\" (UID: \"02617f58-0688-49a8-be3f-9c86c801f751\") " pod="calico-system/whisker-5864c6d54c-kz7l7" Nov 5 16:04:57.718772 kubelet[2821]: I1105 16:04:57.718776 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctl7b\" (UniqueName: \"kubernetes.io/projected/02617f58-0688-49a8-be3f-9c86c801f751-kube-api-access-ctl7b\") pod \"whisker-5864c6d54c-kz7l7\" (UID: \"02617f58-0688-49a8-be3f-9c86c801f751\") " pod="calico-system/whisker-5864c6d54c-kz7l7" Nov 5 16:04:57.746524 containerd[1624]: time="2025-11-05T16:04:57.746472034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697\" id:\"babcb33c80309979f21edb85890242a0d0902bb7701f7e7cb03f3b610b68028c\" pid:4014 exit_status:1 exited_at:{seconds:1762358697 nanos:745952418}" Nov 5 16:04:57.772613 containerd[1624]: time="2025-11-05T16:04:57.772537122Z" level=info msg="connecting to shim b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee" address="unix:///run/containerd/s/b6e25d14c3a82bc5fbf9bcaa5df573df4384b728d4f27b5d0c4c71bd86195a5f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:57.795522 systemd[1]: Started cri-containerd-b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee.scope - libcontainer container b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee. Nov 5 16:04:57.809221 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:04:57.898731 systemd[1]: var-lib-kubelet-pods-c385d739\x2d2fbd\x2d49ea\x2d95d6\x2d32a0c449fade-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5w2k.mount: Deactivated successfully. Nov 5 16:04:57.898842 systemd[1]: var-lib-kubelet-pods-c385d739\x2d2fbd\x2d49ea\x2d95d6\x2d32a0c449fade-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 5 16:04:58.134928 containerd[1624]: time="2025-11-05T16:04:58.134859679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5864c6d54c-kz7l7,Uid:02617f58-0688-49a8-be3f-9c86c801f751,Namespace:calico-system,Attempt:0,}" Nov 5 16:04:58.136103 containerd[1624]: time="2025-11-05T16:04:58.136053990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c4bf75cc-g4fsv,Uid:b05ca89b-5f9b-44f1-a3ba-63e56589f0e4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b5dde52a808db117a1e3b3a023eb351361332ccbe7829b94e462a844cb706aee\"" Nov 5 16:04:58.147637 containerd[1624]: time="2025-11-05T16:04:58.147595081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:04:58.230933 systemd-networkd[1509]: calie39281c1316: Link UP Nov 5 16:04:58.231337 systemd-networkd[1509]: calie39281c1316: Gained carrier Nov 5 16:04:58.245747 containerd[1624]: 2025-11-05 16:04:58.161 [INFO][4073] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:04:58.245747 containerd[1624]: 2025-11-05 16:04:58.171 [INFO][4073] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5864c6d54c--kz7l7-eth0 whisker-5864c6d54c- calico-system 02617f58-0688-49a8-be3f-9c86c801f751 1029 0 2025-11-05 16:04:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5864c6d54c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5864c6d54c-kz7l7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie39281c1316 [] [] }} ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Namespace="calico-system" Pod="whisker-5864c6d54c-kz7l7" WorkloadEndpoint="localhost-k8s-whisker--5864c6d54c--kz7l7-" Nov 5 16:04:58.245747 containerd[1624]: 2025-11-05 16:04:58.171 [INFO][4073] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Namespace="calico-system" Pod="whisker-5864c6d54c-kz7l7" WorkloadEndpoint="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" Nov 5 16:04:58.245747 containerd[1624]: 2025-11-05 16:04:58.196 [INFO][4089] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" HandleID="k8s-pod-network.af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Workload="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.196 [INFO][4089] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" HandleID="k8s-pod-network.af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Workload="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5864c6d54c-kz7l7", "timestamp":"2025-11-05 16:04:58.196634266 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.196 [INFO][4089] ipam/ipam_plugin.go 377: About to acquire host-wide 
IPAM lock. Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.196 [INFO][4089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.196 [INFO][4089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.203 [INFO][4089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" host="localhost" Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.208 [INFO][4089] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.212 [INFO][4089] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.214 [INFO][4089] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.216 [INFO][4089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:04:58.245959 containerd[1624]: 2025-11-05 16:04:58.216 [INFO][4089] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" host="localhost" Nov 5 16:04:58.246257 containerd[1624]: 2025-11-05 16:04:58.217 [INFO][4089] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339 Nov 5 16:04:58.246257 containerd[1624]: 2025-11-05 16:04:58.221 [INFO][4089] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" host="localhost" Nov 5 16:04:58.246257 containerd[1624]: 2025-11-05 16:04:58.225 [INFO][4089] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" host="localhost" Nov 5 16:04:58.246257 containerd[1624]: 2025-11-05 16:04:58.225 [INFO][4089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" host="localhost" Nov 5 16:04:58.246257 containerd[1624]: 2025-11-05 16:04:58.225 [INFO][4089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
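
The [4089] entries above trace Calico's IPAM path for the whisker pod: take the host-wide lock, confirm the host's affinity for block 192.168.88.128/26, load the block, claim the next free address (192.168.88.130), write the block, and release the lock. A rough Go sketch of that shape, using a simple in-memory first-free bitmap rather than Calico's real datastore-backed block (the handle string is abbreviated from the ContainerID in the log):

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models an IPAM block: a CIDR plus a per-address allocation bitmap.
type block struct {
	cidr netip.Prefix
	used []bool // one slot per address in the block
}

var hostLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

// autoAssign claims the first free address in the block, mirroring the
// lock/load/claim/release sequence the plugin logs.
func autoAssign(b *block, handle string) (netip.Addr, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	addr := b.cidr.Addr()
	for i := range b.used {
		if !b.used[i] {
			b.used[i] = true // "Writing block in order to claim IPs"
			fmt.Printf("claimed %s for handle %s\n", addr, handle)
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		used: make([]bool, 64), // a /26 holds 64 addresses
	}
	// Treating the allocator as first-free, .128 and .129 must already be
	// in use for .130 to be handed out next, as seen in the log.
	b.used[0], b.used[1] = true, true
	autoAssign(b, "k8s-pod-network.af122704d3eb") // yields 192.168.88.130
}
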
Nov 5 16:04:58.246257 containerd[1624]: 2025-11-05 16:04:58.225 [INFO][4089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" HandleID="k8s-pod-network.af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Workload="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" Nov 5 16:04:58.246397 containerd[1624]: 2025-11-05 16:04:58.228 [INFO][4073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Namespace="calico-system" Pod="whisker-5864c6d54c-kz7l7" WorkloadEndpoint="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5864c6d54c--kz7l7-eth0", GenerateName:"whisker-5864c6d54c-", Namespace:"calico-system", SelfLink:"", UID:"02617f58-0688-49a8-be3f-9c86c801f751", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5864c6d54c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5864c6d54c-kz7l7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie39281c1316", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:58.246397 containerd[1624]: 2025-11-05 16:04:58.229 [INFO][4073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Namespace="calico-system" Pod="whisker-5864c6d54c-kz7l7" WorkloadEndpoint="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" Nov 5 16:04:58.246475 containerd[1624]: 2025-11-05 16:04:58.229 [INFO][4073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie39281c1316 ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Namespace="calico-system" Pod="whisker-5864c6d54c-kz7l7" WorkloadEndpoint="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" Nov 5 16:04:58.246475 containerd[1624]: 2025-11-05 16:04:58.231 [INFO][4073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Namespace="calico-system" Pod="whisker-5864c6d54c-kz7l7" WorkloadEndpoint="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" Nov 5 16:04:58.246525 containerd[1624]: 2025-11-05 16:04:58.231 [INFO][4073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Namespace="calico-system" Pod="whisker-5864c6d54c-kz7l7" WorkloadEndpoint="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5864c6d54c--kz7l7-eth0", GenerateName:"whisker-5864c6d54c-", Namespace:"calico-system", SelfLink:"", UID:"02617f58-0688-49a8-be3f-9c86c801f751", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5864c6d54c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339", Pod:"whisker-5864c6d54c-kz7l7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie39281c1316", MAC:"b6:3b:05:65:03:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:58.246581 containerd[1624]: 2025-11-05 16:04:58.242 [INFO][4073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" Namespace="calico-system" Pod="whisker-5864c6d54c-kz7l7" WorkloadEndpoint="localhost-k8s-whisker--5864c6d54c--kz7l7-eth0" Nov 5 16:04:58.269448 containerd[1624]: time="2025-11-05T16:04:58.269405340Z" level=info msg="connecting to shim af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339" address="unix:///run/containerd/s/835b08d95ebfafc7defb4ac0d9140c48ab04b107e8bec5b535f54ba46739b950" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:58.302556 systemd[1]: Started cri-containerd-af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339.scope - libcontainer container af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339. 
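
The Calico CNI entries embedded in the containerd stream above share a fixed layout: timestamp, [LEVEL][pid], source file and line number, then the message. A small Go sketch for pulling those fields back out of a captured journal; the layout assumption and the regexp are mine, not anything shipped by Calico:

package main

import (
	"fmt"
	"regexp"
)

// cniLine matches lines such as:
// 2025-11-05 16:04:58.225 [INFO][4089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses ...
var cniLine = regexp.MustCompile(
	`^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := `2025-11-05 16:04:58.225 [INFO][4089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[]`
	if m := cniLine.FindStringSubmatch(line); m != nil {
		// m[1]=timestamp m[2]=level m[3]=pid m[4]=file m[5]=line m[6]=message
		fmt.Printf("%s %s %s:%s  %s\n", m[1], m[2], m[4], m[5], m[6])
	}
}
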
Nov 5 16:04:58.315613 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:04:58.345128 containerd[1624]: time="2025-11-05T16:04:58.345079691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5864c6d54c-kz7l7,Uid:02617f58-0688-49a8-be3f-9c86c801f751,Namespace:calico-system,Attempt:0,} returns sandbox id \"af122704d3eb7dbd3dd4298c2f5d83e74b975418076e970dff83f2bff3903339\"" Nov 5 16:04:58.361452 kubelet[2821]: E1105 16:04:58.361304 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:58.363515 containerd[1624]: time="2025-11-05T16:04:58.362700074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htsp7,Uid:231e5ad8-3fa0-49fe-9747-a9fe616049e3,Namespace:kube-system,Attempt:0,}" Nov 5 16:04:58.369340 kubelet[2821]: I1105 16:04:58.369295 2821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c385d739-2fbd-49ea-95d6-32a0c449fade" path="/var/lib/kubelet/pods/c385d739-2fbd-49ea-95d6-32a0c449fade/volumes" Nov 5 16:04:58.520566 kubelet[2821]: E1105 16:04:58.520387 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:58.566294 containerd[1624]: time="2025-11-05T16:04:58.566245612Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:58.637067 containerd[1624]: time="2025-11-05T16:04:58.636989393Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697\" id:\"23ebc466add0f9bab24b7e4246f67483e5d94c7c67ef788aebd217ab9ae5d1ca\" pid:4284 exit_status:1 exited_at:{seconds:1762358698 nanos:636598617}" Nov 5 16:04:58.914160 containerd[1624]: time="2025-11-05T16:04:58.914013452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:58.918952 containerd[1624]: time="2025-11-05T16:04:58.918889517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:04:58.919264 kubelet[2821]: E1105 16:04:58.919204 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:58.919328 kubelet[2821]: E1105 16:04:58.919277 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:58.919844 containerd[1624]: time="2025-11-05T16:04:58.919627795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:04:58.928792 kubelet[2821]: E1105 16:04:58.928699 2821 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgxdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-55c4bf75cc-g4fsv_calico-apiserver(b05ca89b-5f9b-44f1-a3ba-63e56589f0e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:58.929950 kubelet[2821]: E1105 16:04:58.929906 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv" podUID="b05ca89b-5f9b-44f1-a3ba-63e56589f0e4" Nov 5 16:04:58.932561 systemd-networkd[1509]: calica347ed9531: Link UP Nov 5 16:04:58.933033 systemd-networkd[1509]: calica347ed9531: Gained carrier Nov 5 16:04:58.990189 containerd[1624]: 2025-11-05 16:04:58.428 [INFO][4172] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:04:58.990189 containerd[1624]: 2025-11-05 16:04:58.447 [INFO][4172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--htsp7-eth0 coredns-674b8bbfcf- 
kube-system 231e5ad8-3fa0-49fe-9747-a9fe616049e3 896 0 2025-11-05 16:04:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-htsp7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calica347ed9531 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-htsp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htsp7-" Nov 5 16:04:58.990189 containerd[1624]: 2025-11-05 16:04:58.448 [INFO][4172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-htsp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" Nov 5 16:04:58.990189 containerd[1624]: 2025-11-05 16:04:58.508 [INFO][4256] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" HandleID="k8s-pod-network.50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Workload="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.508 [INFO][4256] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" HandleID="k8s-pod-network.50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Workload="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000124810), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-htsp7", "timestamp":"2025-11-05 16:04:58.508549396 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.508 [INFO][4256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.508 [INFO][4256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.508 [INFO][4256] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.518 [INFO][4256] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" host="localhost" Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.524 [INFO][4256] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.581 [INFO][4256] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.583 [INFO][4256] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.585 [INFO][4256] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:04:58.990471 containerd[1624]: 2025-11-05 16:04:58.585 [INFO][4256] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" host="localhost" Nov 5 16:04:58.990762 containerd[1624]: 2025-11-05 16:04:58.586 [INFO][4256] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2 Nov 5 16:04:58.990762 containerd[1624]: 2025-11-05 16:04:58.811 [INFO][4256] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" host="localhost" Nov 5 16:04:58.990762 containerd[1624]: 2025-11-05 16:04:58.924 [INFO][4256] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" host="localhost" Nov 5 16:04:58.990762 containerd[1624]: 2025-11-05 16:04:58.924 [INFO][4256] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" host="localhost" Nov 5 16:04:58.990762 containerd[1624]: 2025-11-05 16:04:58.924 [INFO][4256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
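
Note the gap in the [4256] trace above: the handle is created at 58.586, but "Writing block" does not complete until 58.811 and the claim confirms at 58.924. As I understand Calico's design, the block write is an optimistic, revision-checked update against the datastore that is retried from a fresh read on conflict. A hedged Go sketch of that compare-and-swap pattern; the store type and its methods are invented for illustration:

package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("revision conflict")

// blockDoc stands in for a datastore-backed IPAM block with a revision.
type blockDoc struct {
	revision int
	used     map[int]bool
}

// store is a toy datastore whose update fails if the caller's revision is stale.
type store struct{ current blockDoc }

// get returns a deep copy the caller can mutate before writing back.
func (s *store) get() blockDoc {
	cp := blockDoc{revision: s.current.revision, used: map[int]bool{}}
	for k, v := range s.current.used {
		cp.used[k] = v
	}
	return cp
}

func (s *store) update(doc blockDoc, expectRev int) error {
	if s.current.revision != expectRev {
		return errConflict // another writer updated the block first
	}
	doc.revision = expectRev + 1
	s.current = doc
	return nil
}

// claim marks one ordinal used and writes the block back, re-reading and
// retrying whenever the revision check fails.
func claim(s *store, ordinal int) error {
	for attempt := 0; attempt < 5; attempt++ {
		doc := s.get()
		if doc.used[ordinal] {
			return fmt.Errorf("ordinal %d already taken", ordinal)
		}
		doc.used[ordinal] = true
		if err := s.update(doc, doc.revision); err == nil {
			return nil // "Writing block in order to claim IPs" succeeded
		}
		// Conflict: loop back, read a fresh copy, try again.
	}
	return errors.New("gave up after repeated conflicts")
}

func main() {
	// Ordinals 0-2 (.128-.130) already used; coredns claims ordinal 3 (.131).
	s := &store{current: blockDoc{used: map[int]bool{0: true, 1: true, 2: true}}}
	fmt.Println(claim(s, 3)) // <nil>
}
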
Nov 5 16:04:58.990762 containerd[1624]: 2025-11-05 16:04:58.924 [INFO][4256] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" HandleID="k8s-pod-network.50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Workload="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" Nov 5 16:04:58.990910 containerd[1624]: 2025-11-05 16:04:58.930 [INFO][4172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-htsp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--htsp7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"231e5ad8-3fa0-49fe-9747-a9fe616049e3", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-htsp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica347ed9531", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:58.990970 containerd[1624]: 2025-11-05 16:04:58.930 [INFO][4172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-htsp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" Nov 5 16:04:58.990970 containerd[1624]: 2025-11-05 16:04:58.930 [INFO][4172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica347ed9531 ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-htsp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" Nov 5 16:04:58.990970 containerd[1624]: 2025-11-05 16:04:58.933 [INFO][4172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-htsp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" Nov 5 16:04:58.991043 
containerd[1624]: 2025-11-05 16:04:58.933 [INFO][4172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-htsp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--htsp7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"231e5ad8-3fa0-49fe-9747-a9fe616049e3", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2", Pod:"coredns-674b8bbfcf-htsp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica347ed9531", MAC:"be:ee:d3:c7:c7:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:58.991043 containerd[1624]: 2025-11-05 16:04:58.986 [INFO][4172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" Namespace="kube-system" Pod="coredns-674b8bbfcf-htsp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--htsp7-eth0" Nov 5 16:04:58.998608 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:35558.service - OpenSSH per-connection server daemon (10.0.0.1:35558). Nov 5 16:04:59.016145 containerd[1624]: time="2025-11-05T16:04:59.016085486Z" level=info msg="connecting to shim 50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2" address="unix:///run/containerd/s/031de2e34c75573215f7f4bb84999eea9c50d45453c72ee46a26960fdf83bce7" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:59.053677 systemd[1]: Started cri-containerd-50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2.scope - libcontainer container 50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2. 
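
The coredns endpoint dumps above print numeric fields in Go hex notation: Port:0x35 is 53 (the dns and dns-tcp container ports) and Port:0x23c1 is 9153 (the metrics port), while numorstring.Protocol{Type:1, StrVal:"UDP"} is just a string-tagged protocol value. A quick Go check of the conversions:

package main

import "fmt"

func main() {
	fmt.Println(0x35)         // 53   -> the dns and dns-tcp container ports
	fmt.Println(0x23c1)       // 9153 -> the coredns metrics port
	fmt.Printf("%#x\n", 9153) // 0x23c1, reproducing the printed form
}
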
Nov 5 16:04:59.072173 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:04:59.076156 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 35558 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:04:59.079649 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:59.085359 systemd-logind[1592]: New session 9 of user core. Nov 5 16:04:59.092725 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 16:04:59.106532 containerd[1624]: time="2025-11-05T16:04:59.106494159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htsp7,Uid:231e5ad8-3fa0-49fe-9747-a9fe616049e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2\"" Nov 5 16:04:59.107297 kubelet[2821]: E1105 16:04:59.107272 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:59.119839 containerd[1624]: time="2025-11-05T16:04:59.119794752Z" level=info msg="CreateContainer within sandbox \"50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 16:04:59.135594 containerd[1624]: time="2025-11-05T16:04:59.135554620Z" level=info msg="Container c56c6c0973f4113df5a5dea312f316eb24c83c78ebb2e4424e8fec3a24263527: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:04:59.141510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152176104.mount: Deactivated successfully. Nov 5 16:04:59.152623 containerd[1624]: time="2025-11-05T16:04:59.152577469Z" level=info msg="CreateContainer within sandbox \"50458cfd61237ec3e59476f000227067497621ae5265ea1c242dcd05615d12e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c56c6c0973f4113df5a5dea312f316eb24c83c78ebb2e4424e8fec3a24263527\"" Nov 5 16:04:59.155221 containerd[1624]: time="2025-11-05T16:04:59.155189869Z" level=info msg="StartContainer for \"c56c6c0973f4113df5a5dea312f316eb24c83c78ebb2e4424e8fec3a24263527\"" Nov 5 16:04:59.157917 containerd[1624]: time="2025-11-05T16:04:59.157878217Z" level=info msg="connecting to shim c56c6c0973f4113df5a5dea312f316eb24c83c78ebb2e4424e8fec3a24263527" address="unix:///run/containerd/s/031de2e34c75573215f7f4bb84999eea9c50d45453c72ee46a26960fdf83bce7" protocol=ttrpc version=3 Nov 5 16:04:59.167556 systemd-networkd[1509]: cali0ffcdd84305: Gained IPv6LL Nov 5 16:04:59.186507 systemd[1]: Started cri-containerd-c56c6c0973f4113df5a5dea312f316eb24c83c78ebb2e4424e8fec3a24263527.scope - libcontainer container c56c6c0973f4113df5a5dea312f316eb24c83c78ebb2e4424e8fec3a24263527. Nov 5 16:04:59.231764 containerd[1624]: time="2025-11-05T16:04:59.231709752Z" level=info msg="StartContainer for \"c56c6c0973f4113df5a5dea312f316eb24c83c78ebb2e4424e8fec3a24263527\" returns successfully" Nov 5 16:04:59.257681 sshd[4384]: Connection closed by 10.0.0.1 port 35558 Nov 5 16:04:59.258552 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:59.265508 systemd-logind[1592]: Session 9 logged out. Waiting for processes to exit. Nov 5 16:04:59.266376 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:35558.service: Deactivated successfully. Nov 5 16:04:59.269756 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 16:04:59.272216 systemd-logind[1592]: Removed session 9. 
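
The create/start sequence logged above (CreateContainer within the sandbox, then StartContainer, then "returns successfully") is driven through the CRI, but the same lifecycle is easy to reproduce with containerd's own Go client. A minimal sketch, assuming a reachable containerd socket, the k8s.io namespace used by every shim connection in this log, and a stand-in image ref since the calico images here 404'd:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// All shim connections in this journal carry namespace=k8s.io.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: metadata plus a snapshot and an OCI runtime spec.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: a task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("StartContainer returned successfully")
}
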
Nov 5 16:04:59.294566 systemd-networkd[1509]: calie39281c1316: Gained IPv6LL Nov 5 16:04:59.295734 containerd[1624]: time="2025-11-05T16:04:59.295687607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:59.296994 containerd[1624]: time="2025-11-05T16:04:59.296956330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:04:59.297061 containerd[1624]: time="2025-11-05T16:04:59.297042626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:04:59.297327 kubelet[2821]: E1105 16:04:59.297283 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:59.297501 kubelet[2821]: E1105 16:04:59.297339 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:59.297697 kubelet[2821]: E1105 16:04:59.297646 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2d7ccb3c6a0c49d4ae276381119de287,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ctl7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5864c6d54c-kz7l7_calico-system(02617f58-0688-49a8-be3f-9c86c801f751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" 
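
From here the same pattern repeats for each calico image: PullImage, a 404 from ghcr.io, ErrImagePull, and then ImagePullBackOff on later pod syncs while kubelet waits out a growing delay. Kubelet's documented back-off is roughly exponential, starting around 10s and capped at 5 minutes; a toy Go sketch of that retry shape, with the pull itself stubbed out:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("ghcr.io/flatcar/calico/whisker:v3.30.4: not found")

// pullImage stubs the real CRI PullImage call, which keeps returning NotFound here.
func pullImage(ref string) error { return errNotFound }

func main() {
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delay := initialDelay
	for attempt := 1; attempt <= 5; attempt++ {
		if err := pullImage("ghcr.io/flatcar/calico/whisker:v3.30.4"); err != nil {
			// The first failure surfaces as ErrImagePull; syncs that land
			// inside the wait window are reported as ImagePullBackOff.
			fmt.Printf("attempt %d: %v; next retry in %s\n", attempt, err, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
			continue // a real kubelet reschedules the sync rather than sleeping
		}
		fmt.Println("pulled")
		return
	}
}
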
Nov 5 16:04:59.299887 containerd[1624]: time="2025-11-05T16:04:59.299787644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:04:59.361264 containerd[1624]: time="2025-11-05T16:04:59.361203596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ghhp8,Uid:95105778-77b0-4ad6-94f0-b022607ec4da,Namespace:calico-system,Attempt:0,}" Nov 5 16:04:59.396628 systemd-networkd[1509]: vxlan.calico: Link UP Nov 5 16:04:59.396636 systemd-networkd[1509]: vxlan.calico: Gained carrier Nov 5 16:04:59.486404 systemd-networkd[1509]: cali7e74f11fdc2: Link UP Nov 5 16:04:59.487038 systemd-networkd[1509]: cali7e74f11fdc2: Gained carrier Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.410 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--ghhp8-eth0 goldmane-666569f655- calico-system 95105778-77b0-4ad6-94f0-b022607ec4da 899 0 2025-11-05 16:04:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-ghhp8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7e74f11fdc2 [] [] }} ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Namespace="calico-system" Pod="goldmane-666569f655-ghhp8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ghhp8-" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.410 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Namespace="calico-system" Pod="goldmane-666569f655-ghhp8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ghhp8-eth0" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.451 [INFO][4472] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" HandleID="k8s-pod-network.c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Workload="localhost-k8s-goldmane--666569f655--ghhp8-eth0" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.451 [INFO][4472] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" HandleID="k8s-pod-network.c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Workload="localhost-k8s-goldmane--666569f655--ghhp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000127af0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-ghhp8", "timestamp":"2025-11-05 16:04:59.451721299 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.451 [INFO][4472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.451 [INFO][4472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.452 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.458 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" host="localhost" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.461 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.465 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.466 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.468 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.468 [INFO][4472] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" host="localhost" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.470 [INFO][4472] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.473 [INFO][4472] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" host="localhost" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.479 [INFO][4472] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" host="localhost" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.479 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" host="localhost" Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.479 [INFO][4472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
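
This is the third allocation out of the same affine block: .130 went to whisker, .131 to coredns, and now .132 to goldmane, each tracked under a k8s-pod-network.<containerID> handle so the address can be released when the pod goes away. For reference, block 192.168.88.128/26 spans exactly 64 addresses, .128 through .191; a quick check with Go's net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")

	// A /26 leaves 32-26 = 6 host bits: 64 addresses.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)

	// Last address in the block: .128 + 63 = .191.
	last := block.Addr()
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	fmt.Printf("range %s - %s\n", block.Addr(), last) // 192.168.88.128 - 192.168.88.191
}
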
Nov 5 16:04:59.502937 containerd[1624]: 2025-11-05 16:04:59.479 [INFO][4472] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" HandleID="k8s-pod-network.c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Workload="localhost-k8s-goldmane--666569f655--ghhp8-eth0" Nov 5 16:04:59.503550 containerd[1624]: 2025-11-05 16:04:59.483 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Namespace="calico-system" Pod="goldmane-666569f655-ghhp8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ghhp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ghhp8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"95105778-77b0-4ad6-94f0-b022607ec4da", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-ghhp8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e74f11fdc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:59.503550 containerd[1624]: 2025-11-05 16:04:59.483 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Namespace="calico-system" Pod="goldmane-666569f655-ghhp8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ghhp8-eth0" Nov 5 16:04:59.503550 containerd[1624]: 2025-11-05 16:04:59.483 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e74f11fdc2 ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Namespace="calico-system" Pod="goldmane-666569f655-ghhp8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ghhp8-eth0" Nov 5 16:04:59.503550 containerd[1624]: 2025-11-05 16:04:59.486 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Namespace="calico-system" Pod="goldmane-666569f655-ghhp8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ghhp8-eth0" Nov 5 16:04:59.503550 containerd[1624]: 2025-11-05 16:04:59.487 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Namespace="calico-system" Pod="goldmane-666569f655-ghhp8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ghhp8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ghhp8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"95105778-77b0-4ad6-94f0-b022607ec4da", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe", Pod:"goldmane-666569f655-ghhp8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7e74f11fdc2", MAC:"b2:30:0b:42:9a:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:59.503550 containerd[1624]: 2025-11-05 16:04:59.498 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" Namespace="calico-system" Pod="goldmane-666569f655-ghhp8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ghhp8-eth0" Nov 5 16:04:59.530010 kubelet[2821]: E1105 16:04:59.529865 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:59.531032 kubelet[2821]: E1105 16:04:59.530183 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv" podUID="b05ca89b-5f9b-44f1-a3ba-63e56589f0e4" Nov 5 16:04:59.537540 containerd[1624]: time="2025-11-05T16:04:59.537491274Z" level=info msg="connecting to shim c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe" address="unix:///run/containerd/s/066546cde0192d7e9e62996871cb6672b4a14187a55d56ca2e444d1a3d766a98" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:59.560372 kubelet[2821]: I1105 16:04:59.560294 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-htsp7" podStartSLOduration=37.560236787 podStartE2EDuration="37.560236787s" podCreationTimestamp="2025-11-05 16:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:04:59.545710424 +0000 UTC m=+43.304864047" watchObservedRunningTime="2025-11-05 16:04:59.560236787 +0000 
UTC m=+43.319390410" Nov 5 16:04:59.574561 systemd[1]: Started cri-containerd-c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe.scope - libcontainer container c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe. Nov 5 16:04:59.595797 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:04:59.626154 containerd[1624]: time="2025-11-05T16:04:59.626073476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ghhp8,Uid:95105778-77b0-4ad6-94f0-b022607ec4da,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7be8481f73b3e98adcdb8ed1ec9ea73b1e321d8399fcffc673062d5155137fe\"" Nov 5 16:04:59.628181 containerd[1624]: time="2025-11-05T16:04:59.628129100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:59.629392 containerd[1624]: time="2025-11-05T16:04:59.629325132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:04:59.629572 kubelet[2821]: E1105 16:04:59.629525 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:59.629622 kubelet[2821]: E1105 16:04:59.629584 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:59.629844 kubelet[2821]: E1105 16:04:59.629791 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctl7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5864c6d54c-kz7l7_calico-system(02617f58-0688-49a8-be3f-9c86c801f751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:59.630920 kubelet[2821]: E1105 16:04:59.630879 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5864c6d54c-kz7l7" podUID="02617f58-0688-49a8-be3f-9c86c801f751" Nov 5 16:04:59.643496 containerd[1624]: time="2025-11-05T16:04:59.629397312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:59.643496 containerd[1624]: time="2025-11-05T16:04:59.630897763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:05:00.018785 containerd[1624]: time="2025-11-05T16:05:00.018742386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 
5 16:05:00.019764 containerd[1624]: time="2025-11-05T16:05:00.019723851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:05:00.019919 containerd[1624]: time="2025-11-05T16:05:00.019799899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:05:00.020025 kubelet[2821]: E1105 16:05:00.019974 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:05:00.020071 kubelet[2821]: E1105 16:05:00.020030 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:05:00.020266 kubelet[2821]: E1105 16:05:00.020170 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ghhp8_calico-system(95105778-77b0-4ad6-94f0-b022607ec4da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:00.021490 kubelet[2821]: E1105 16:05:00.021431 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ghhp8" podUID="95105778-77b0-4ad6-94f0-b022607ec4da" Nov 5 16:05:00.361193 containerd[1624]: time="2025-11-05T16:05:00.361026294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c4bf75cc-smcwt,Uid:3b0524f3-6d33-4a2f-8ac8-972312ac8fcc,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:05:00.361898 containerd[1624]: time="2025-11-05T16:05:00.361250868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546f546666-6794m,Uid:f5389104-99ae-4ef4-ba0e-916e3b8ce467,Namespace:calico-system,Attempt:0,}" Nov 5 16:05:00.556629 kubelet[2821]: E1105 16:05:00.556087 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:00.557732 kubelet[2821]: E1105 16:05:00.557703 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5864c6d54c-kz7l7" podUID="02617f58-0688-49a8-be3f-9c86c801f751" Nov 5 16:05:00.558057 kubelet[2821]: 
E1105 16:05:00.558035 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ghhp8" podUID="95105778-77b0-4ad6-94f0-b022607ec4da" Nov 5 16:05:00.830628 systemd-networkd[1509]: cali7e74f11fdc2: Gained IPv6LL Nov 5 16:05:00.894554 systemd-networkd[1509]: calica347ed9531: Gained IPv6LL Nov 5 16:05:01.032703 systemd-networkd[1509]: calic7783c295f8: Link UP Nov 5 16:05:01.033839 systemd-networkd[1509]: calic7783c295f8: Gained carrier Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.517 [INFO][4592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0 calico-apiserver-55c4bf75cc- calico-apiserver 3b0524f3-6d33-4a2f-8ac8-972312ac8fcc 900 0 2025-11-05 16:04:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55c4bf75cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55c4bf75cc-smcwt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7783c295f8 [] [] }} ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-smcwt" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.517 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-smcwt" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.549 [INFO][4622] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" HandleID="k8s-pod-network.b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Workload="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.549 [INFO][4622] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" HandleID="k8s-pod-network.b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Workload="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55c4bf75cc-smcwt", "timestamp":"2025-11-05 16:05:00.549526254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.549 [INFO][4622] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.549 [INFO][4622] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.550 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.560 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" host="localhost" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.600 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.657 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.811 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.814 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.814 [INFO][4622] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" host="localhost" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:00.874 [INFO][4622] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:01.003 [INFO][4622] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" host="localhost" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:01.025 [INFO][4622] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" host="localhost" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:01.026 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" host="localhost" Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:01.026 [INFO][4622] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
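Every PullImage failure in this log (goldmane above; whisker, apiserver, kube-controllers and csi later) is the same containerd NotFound: resolving the tag ghcr.io/flatcar/calico/<image>:v3.30.4 gets a 404 from ghcr.io, meaning no manifest exists for that tag. Below is a minimal Go sketch of the same resolve-and-pull step issued against containerd directly. The default socket path /run/containerd/containerd.sock and the k8s.io namespace are assumptions (the log only shows per-container shim sockets); this is illustrative, not kubelet's actual CRI call path.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket; kubelet talks to the same daemon.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in containerd's k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.4")
	if errdefs.IsNotFound(err) {
		// Corresponds to the "failed to resolve reference ... not found"
		// lines above: the registry answered, but the tag has no manifest.
		fmt.Println("tag not found upstream:", err)
	} else if err != nil {
		fmt.Println("pull failed for another reason:", err)
	}
}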
Nov 5 16:05:01.077739 containerd[1624]: 2025-11-05 16:05:01.026 [INFO][4622] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" HandleID="k8s-pod-network.b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Workload="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" Nov 5 16:05:01.078535 containerd[1624]: 2025-11-05 16:05:01.029 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-smcwt" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0", GenerateName:"calico-apiserver-55c4bf75cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b0524f3-6d33-4a2f-8ac8-972312ac8fcc", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c4bf75cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55c4bf75cc-smcwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7783c295f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:05:01.078535 containerd[1624]: 2025-11-05 16:05:01.029 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-smcwt" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" Nov 5 16:05:01.078535 containerd[1624]: 2025-11-05 16:05:01.029 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7783c295f8 ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-smcwt" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" Nov 5 16:05:01.078535 containerd[1624]: 2025-11-05 16:05:01.034 [INFO][4592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-smcwt" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" Nov 5 16:05:01.078535 containerd[1624]: 2025-11-05 16:05:01.035 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-smcwt" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0", GenerateName:"calico-apiserver-55c4bf75cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b0524f3-6d33-4a2f-8ac8-972312ac8fcc", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c4bf75cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d", Pod:"calico-apiserver-55c4bf75cc-smcwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7783c295f8", MAC:"6a:52:c2:cc:dd:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:05:01.078535 containerd[1624]: 2025-11-05 16:05:01.073 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" Namespace="calico-apiserver" Pod="calico-apiserver-55c4bf75cc-smcwt" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c4bf75cc--smcwt-eth0" Nov 5 16:05:01.110927 systemd-networkd[1509]: calif71335e8c8d: Link UP Nov 5 16:05:01.114457 containerd[1624]: time="2025-11-05T16:05:01.113758941Z" level=info msg="connecting to shim b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d" address="unix:///run/containerd/s/07d7ef15440b7db8b7f9824260a50d69ac0d00f31e8156d54cdd5470c56698fc" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:05:01.113913 systemd-networkd[1509]: calif71335e8c8d: Gained carrier Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:00.523 [INFO][4596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0 calico-kube-controllers-546f546666- calico-system f5389104-99ae-4ef4-ba0e-916e3b8ce467 895 0 2025-11-05 16:04:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:546f546666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-546f546666-6794m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif71335e8c8d [] [] }} ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Namespace="calico-system" 
Pod="calico-kube-controllers-546f546666-6794m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546f546666--6794m-" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:00.523 [INFO][4596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Namespace="calico-system" Pod="calico-kube-controllers-546f546666-6794m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:00.558 [INFO][4628] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" HandleID="k8s-pod-network.9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Workload="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:00.558 [INFO][4628] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" HandleID="k8s-pod-network.9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Workload="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-546f546666-6794m", "timestamp":"2025-11-05 16:05:00.558766032 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:00.558 [INFO][4628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.026 [INFO][4628] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.026 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.033 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" host="localhost" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.038 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.072 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.076 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.079 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.079 [INFO][4628] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" host="localhost" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.082 [INFO][4628] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407 Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.087 [INFO][4628] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" host="localhost" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.093 [INFO][4628] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" host="localhost" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.093 [INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" host="localhost" Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.093 [INFO][4628] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:05:01.140465 containerd[1624]: 2025-11-05 16:05:01.093 [INFO][4628] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" HandleID="k8s-pod-network.9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Workload="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" Nov 5 16:05:01.141095 containerd[1624]: 2025-11-05 16:05:01.099 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Namespace="calico-system" Pod="calico-kube-controllers-546f546666-6794m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0", GenerateName:"calico-kube-controllers-546f546666-", Namespace:"calico-system", SelfLink:"", UID:"f5389104-99ae-4ef4-ba0e-916e3b8ce467", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546f546666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-546f546666-6794m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif71335e8c8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:05:01.141095 containerd[1624]: 2025-11-05 16:05:01.099 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Namespace="calico-system" Pod="calico-kube-controllers-546f546666-6794m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" Nov 5 16:05:01.141095 containerd[1624]: 2025-11-05 16:05:01.099 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif71335e8c8d ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Namespace="calico-system" Pod="calico-kube-controllers-546f546666-6794m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" Nov 5 16:05:01.141095 containerd[1624]: 2025-11-05 16:05:01.115 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Namespace="calico-system" Pod="calico-kube-controllers-546f546666-6794m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" Nov 5 16:05:01.141095 containerd[1624]: 2025-11-05 16:05:01.116 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Namespace="calico-system" Pod="calico-kube-controllers-546f546666-6794m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0", GenerateName:"calico-kube-controllers-546f546666-", Namespace:"calico-system", SelfLink:"", UID:"f5389104-99ae-4ef4-ba0e-916e3b8ce467", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546f546666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407", Pod:"calico-kube-controllers-546f546666-6794m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif71335e8c8d", MAC:"56:88:87:ba:5f:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:05:01.141095 containerd[1624]: 2025-11-05 16:05:01.132 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" Namespace="calico-system" Pod="calico-kube-controllers-546f546666-6794m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546f546666--6794m-eth0" Nov 5 16:05:01.158680 systemd[1]: Started cri-containerd-b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d.scope - libcontainer container b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d. Nov 5 16:05:01.187374 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:05:01.189655 containerd[1624]: time="2025-11-05T16:05:01.189601988Z" level=info msg="connecting to shim 9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407" address="unix:///run/containerd/s/225c20d60bb489a8d4353da68a49b47304ef61e04bab1d11318cf968944196fd" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:05:01.229536 systemd[1]: Started cri-containerd-9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407.scope - libcontainer container 9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407. 
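The two IPAM traces above take the same path for both pods: confirm this host's affinity for block 192.168.88.128/26, load the block, then claim the next free address from it (192.168.88.133 for calico-apiserver-55c4bf75cc-smcwt, 192.168.88.134 for calico-kube-controllers-546f546666-6794m). A standalone Go sketch of the containment arithmetic behind those claims, using only the standard net package rather than Calico's IPAM code:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and addresses taken from the ipam/ipam.go lines above.
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	// A /26 spans 64 addresses, 192.168.88.128 through 192.168.88.191,
	// so both claimed IPs fall inside the host-affine block.
	for _, s := range []string{"192.168.88.133", "192.168.88.134"} {
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(net.ParseIP(s)))
	}
}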
Nov 5 16:05:01.232406 containerd[1624]: time="2025-11-05T16:05:01.232309249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c4bf75cc-smcwt,Uid:3b0524f3-6d33-4a2f-8ac8-972312ac8fcc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b141b53b1b69de3ac6b1b6128231dec4e632ac650ec504848b8e6532ecf81d5d\"" Nov 5 16:05:01.235760 containerd[1624]: time="2025-11-05T16:05:01.235715654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:05:01.248267 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:05:01.288816 containerd[1624]: time="2025-11-05T16:05:01.288759205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546f546666-6794m,Uid:f5389104-99ae-4ef4-ba0e-916e3b8ce467,Namespace:calico-system,Attempt:0,} returns sandbox id \"9432155b3456276b1ce9bb2b2bed67d9034f0c155a3f8b1556fc325de3432407\"" Nov 5 16:05:01.342575 systemd-networkd[1509]: vxlan.calico: Gained IPv6LL Nov 5 16:05:01.361592 containerd[1624]: time="2025-11-05T16:05:01.361538739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qm7k4,Uid:998850e6-5a3e-41d3-948e-1a886bae0358,Namespace:calico-system,Attempt:0,}" Nov 5 16:05:01.473168 systemd-networkd[1509]: cali28359765711: Link UP Nov 5 16:05:01.474453 systemd-networkd[1509]: cali28359765711: Gained carrier Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.403 [INFO][4748] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qm7k4-eth0 csi-node-driver- calico-system 998850e6-5a3e-41d3-948e-1a886bae0358 777 0 2025-11-05 16:04:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qm7k4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali28359765711 [] [] }} ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Namespace="calico-system" Pod="csi-node-driver-qm7k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--qm7k4-" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.403 [INFO][4748] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Namespace="calico-system" Pod="csi-node-driver-qm7k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--qm7k4-eth0" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.433 [INFO][4763] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" HandleID="k8s-pod-network.3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Workload="localhost-k8s-csi--node--driver--qm7k4-eth0" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.433 [INFO][4763] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" HandleID="k8s-pod-network.3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Workload="localhost-k8s-csi--node--driver--qm7k4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70e0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qm7k4", "timestamp":"2025-11-05 16:05:01.433572834 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.433 [INFO][4763] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.433 [INFO][4763] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.433 [INFO][4763] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.441 [INFO][4763] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" host="localhost" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.447 [INFO][4763] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.451 [INFO][4763] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.453 [INFO][4763] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.455 [INFO][4763] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.455 [INFO][4763] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" host="localhost" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.457 [INFO][4763] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284 Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.461 [INFO][4763] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" host="localhost" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.466 [INFO][4763] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" host="localhost" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.466 [INFO][4763] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" host="localhost" Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.467 [INFO][4763] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:05:01.489479 containerd[1624]: 2025-11-05 16:05:01.467 [INFO][4763] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" HandleID="k8s-pod-network.3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Workload="localhost-k8s-csi--node--driver--qm7k4-eth0" Nov 5 16:05:01.490121 containerd[1624]: 2025-11-05 16:05:01.470 [INFO][4748] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Namespace="calico-system" Pod="csi-node-driver-qm7k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--qm7k4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qm7k4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"998850e6-5a3e-41d3-948e-1a886bae0358", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qm7k4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28359765711", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:05:01.490121 containerd[1624]: 2025-11-05 16:05:01.470 [INFO][4748] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Namespace="calico-system" Pod="csi-node-driver-qm7k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--qm7k4-eth0" Nov 5 16:05:01.490121 containerd[1624]: 2025-11-05 16:05:01.470 [INFO][4748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28359765711 ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Namespace="calico-system" Pod="csi-node-driver-qm7k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--qm7k4-eth0" Nov 5 16:05:01.490121 containerd[1624]: 2025-11-05 16:05:01.474 [INFO][4748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Namespace="calico-system" Pod="csi-node-driver-qm7k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--qm7k4-eth0" Nov 5 16:05:01.490121 containerd[1624]: 2025-11-05 16:05:01.476 [INFO][4748] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Namespace="calico-system" Pod="csi-node-driver-qm7k4" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qm7k4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qm7k4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"998850e6-5a3e-41d3-948e-1a886bae0358", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284", Pod:"csi-node-driver-qm7k4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28359765711", MAC:"4a:a9:29:a8:b3:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:05:01.490121 containerd[1624]: 2025-11-05 16:05:01.485 [INFO][4748] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" Namespace="calico-system" Pod="csi-node-driver-qm7k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--qm7k4-eth0" Nov 5 16:05:01.518171 containerd[1624]: time="2025-11-05T16:05:01.518084393Z" level=info msg="connecting to shim 3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284" address="unix:///run/containerd/s/d3b925053220621df6da0e8ff0ca9e94386608c657eda4439b2eb2a3334a8372" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:05:01.555503 systemd[1]: Started cri-containerd-3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284.scope - libcontainer container 3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284. 
Nov 5 16:05:01.562578 kubelet[2821]: E1105 16:05:01.562527 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ghhp8" podUID="95105778-77b0-4ad6-94f0-b022607ec4da" Nov 5 16:05:01.575402 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:05:01.595684 containerd[1624]: time="2025-11-05T16:05:01.595633202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qm7k4,Uid:998850e6-5a3e-41d3-948e-1a886bae0358,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f50222727be9ba3b656b4deca9161b92a78c9fb73343a5578c6b7d061c1c284\"" Nov 5 16:05:01.598018 containerd[1624]: time="2025-11-05T16:05:01.597956857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:01.599147 containerd[1624]: time="2025-11-05T16:05:01.599108961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:05:01.599418 containerd[1624]: time="2025-11-05T16:05:01.599171190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:05:01.599460 kubelet[2821]: E1105 16:05:01.599387 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:01.599520 kubelet[2821]: E1105 16:05:01.599463 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:01.599888 containerd[1624]: time="2025-11-05T16:05:01.599846414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:05:01.599981 kubelet[2821]: E1105 16:05:01.599737 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n89g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-55c4bf75cc-smcwt_calico-apiserver(3b0524f3-6d33-4a2f-8ac8-972312ac8fcc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:01.601133 kubelet[2821]: E1105 16:05:01.601064 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt" podUID="3b0524f3-6d33-4a2f-8ac8-972312ac8fcc" Nov 5 16:05:02.311893 containerd[1624]: time="2025-11-05T16:05:02.311785869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:02.313013 containerd[1624]: time="2025-11-05T16:05:02.312967838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:05:02.313088 containerd[1624]: time="2025-11-05T16:05:02.313062270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 
5 16:05:02.313410 kubelet[2821]: E1105 16:05:02.313302 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:05:02.313473 kubelet[2821]: E1105 16:05:02.313406 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:05:02.314001 containerd[1624]: time="2025-11-05T16:05:02.313731702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:05:02.314052 kubelet[2821]: E1105 16:05:02.313737 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqz4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-546f546666-6794m_calico-system(f5389104-99ae-4ef4-ba0e-916e3b8ce467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:02.315160 kubelet[2821]: E1105 16:05:02.315090 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546f546666-6794m" podUID="f5389104-99ae-4ef4-ba0e-916e3b8ce467" Nov 5 16:05:02.361454 kubelet[2821]: E1105 16:05:02.361416 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:02.362485 containerd[1624]: time="2025-11-05T16:05:02.362397080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jzmkb,Uid:b04f7cec-01c3-4233-b501-2b57e869475f,Namespace:kube-system,Attempt:0,}" Nov 5 16:05:02.480444 systemd-networkd[1509]: calieb5cd826248: Link UP Nov 5 16:05:02.481322 systemd-networkd[1509]: calieb5cd826248: Gained carrier Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.402 [INFO][4829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0 coredns-674b8bbfcf- kube-system b04f7cec-01c3-4233-b501-2b57e869475f 897 0 2025-11-05 16:04:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-jzmkb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieb5cd826248 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-jzmkb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jzmkb-" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.403 [INFO][4829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-jzmkb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.434 [INFO][4843] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" HandleID="k8s-pod-network.5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" Workload="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.434 [INFO][4843] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" HandleID="k8s-pod-network.5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" Workload="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-jzmkb", "timestamp":"2025-11-05 16:05:02.434656172 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.434 [INFO][4843] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.434 [INFO][4843] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.434 [INFO][4843] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.441 [INFO][4843] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" host="localhost" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.446 [INFO][4843] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.451 [INFO][4843] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.453 [INFO][4843] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.456 [INFO][4843] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.456 [INFO][4843] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" host="localhost" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.458 [INFO][4843] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9 Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.465 [INFO][4843] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" host="localhost" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.472 [INFO][4843] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" host="localhost" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.472 [INFO][4843] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" host="localhost" Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.472 [INFO][4843] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:05:02.504538 containerd[1624]: 2025-11-05 16:05:02.472 [INFO][4843] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" HandleID="k8s-pod-network.5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" Workload="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" Nov 5 16:05:02.506255 containerd[1624]: 2025-11-05 16:05:02.477 [INFO][4829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-jzmkb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b04f7cec-01c3-4233-b501-2b57e869475f", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-jzmkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb5cd826248", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:05:02.506255 containerd[1624]: 2025-11-05 16:05:02.477 [INFO][4829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-jzmkb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" Nov 5 16:05:02.506255 containerd[1624]: 2025-11-05 16:05:02.477 [INFO][4829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb5cd826248 ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-jzmkb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" Nov 5 16:05:02.506255 containerd[1624]: 2025-11-05 16:05:02.480 [INFO][4829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-jzmkb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" Nov 5 16:05:02.506255 containerd[1624]: 2025-11-05 16:05:02.482 [INFO][4829] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-jzmkb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b04f7cec-01c3-4233-b501-2b57e869475f", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9", Pod:"coredns-674b8bbfcf-jzmkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb5cd826248", MAC:"5a:65:b2:49:be:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:05:02.506255 containerd[1624]: 2025-11-05 16:05:02.499 [INFO][4829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-jzmkb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jzmkb-eth0" Nov 5 16:05:02.555397 containerd[1624]: time="2025-11-05T16:05:02.555231459Z" level=info msg="connecting to shim 5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9" address="unix:///run/containerd/s/506e22ad627a0a0c6a81bc4d1e6fb92c78766af86f49717522827a7f898d242d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:05:02.573144 kubelet[2821]: E1105 16:05:02.572996 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546f546666-6794m" podUID="f5389104-99ae-4ef4-ba0e-916e3b8ce467" Nov 5 16:05:02.575464 kubelet[2821]: E1105 16:05:02.573174 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt" podUID="3b0524f3-6d33-4a2f-8ac8-972312ac8fcc" Nov 5 16:05:02.602549 systemd[1]: Started cri-containerd-5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9.scope - libcontainer container 5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9. Nov 5 16:05:02.624263 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:05:02.668994 containerd[1624]: time="2025-11-05T16:05:02.668909944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jzmkb,Uid:b04f7cec-01c3-4233-b501-2b57e869475f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9\"" Nov 5 16:05:02.670920 kubelet[2821]: E1105 16:05:02.670399 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:02.674878 containerd[1624]: time="2025-11-05T16:05:02.674836024Z" level=info msg="CreateContainer within sandbox \"5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 16:05:02.690500 containerd[1624]: time="2025-11-05T16:05:02.690386255Z" level=info msg="Container 24ce45e98d9635d44e200bb9bc03d6ef8eb9bd782c5de7a2a4b2a55aa5911e57: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:05:02.694910 containerd[1624]: time="2025-11-05T16:05:02.694839944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:02.696877 containerd[1624]: time="2025-11-05T16:05:02.696807499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:05:02.697668 containerd[1624]: time="2025-11-05T16:05:02.696822118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:05:02.697720 kubelet[2821]: E1105 16:05:02.697195 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:02.697720 kubelet[2821]: E1105 16:05:02.697270 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:02.697720 kubelet[2821]: E1105 16:05:02.697493 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp2vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qm7k4_calico-system(998850e6-5a3e-41d3-948e-1a886bae0358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:02.699661 containerd[1624]: time="2025-11-05T16:05:02.699601227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:05:02.700746 containerd[1624]: time="2025-11-05T16:05:02.700657534Z" level=info msg="CreateContainer within sandbox \"5a37a3f386ff90e9c8687e00a851055a9903c804214cfc9196d4e81afe1b18e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24ce45e98d9635d44e200bb9bc03d6ef8eb9bd782c5de7a2a4b2a55aa5911e57\"" Nov 5 16:05:02.701469 containerd[1624]: time="2025-11-05T16:05:02.701410226Z" level=info msg="StartContainer for \"24ce45e98d9635d44e200bb9bc03d6ef8eb9bd782c5de7a2a4b2a55aa5911e57\"" Nov 5 16:05:02.702669 containerd[1624]: time="2025-11-05T16:05:02.702581886Z" level=info msg="connecting to shim 
24ce45e98d9635d44e200bb9bc03d6ef8eb9bd782c5de7a2a4b2a55aa5911e57" address="unix:///run/containerd/s/506e22ad627a0a0c6a81bc4d1e6fb92c78766af86f49717522827a7f898d242d" protocol=ttrpc version=3 Nov 5 16:05:02.736525 systemd[1]: Started cri-containerd-24ce45e98d9635d44e200bb9bc03d6ef8eb9bd782c5de7a2a4b2a55aa5911e57.scope - libcontainer container 24ce45e98d9635d44e200bb9bc03d6ef8eb9bd782c5de7a2a4b2a55aa5911e57. Nov 5 16:05:02.751727 systemd-networkd[1509]: calif71335e8c8d: Gained IPv6LL Nov 5 16:05:02.777086 containerd[1624]: time="2025-11-05T16:05:02.777031763Z" level=info msg="StartContainer for \"24ce45e98d9635d44e200bb9bc03d6ef8eb9bd782c5de7a2a4b2a55aa5911e57\" returns successfully" Nov 5 16:05:03.006595 systemd-networkd[1509]: calic7783c295f8: Gained IPv6LL Nov 5 16:05:03.027646 containerd[1624]: time="2025-11-05T16:05:03.027599932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:03.221242 containerd[1624]: time="2025-11-05T16:05:03.221134543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:05:03.221320 containerd[1624]: time="2025-11-05T16:05:03.221204257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:05:03.221584 kubelet[2821]: E1105 16:05:03.221533 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:03.221653 kubelet[2821]: E1105 16:05:03.221595 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:03.221798 kubelet[2821]: E1105 16:05:03.221751 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp2vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qm7k4_calico-system(998850e6-5a3e-41d3-948e-1a886bae0358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:03.222972 kubelet[2821]: E1105 16:05:03.222929 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358" Nov 5 16:05:03.326586 systemd-networkd[1509]: cali28359765711: Gained IPv6LL Nov 5 16:05:03.573809 kubelet[2821]: E1105 16:05:03.573766 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:03.575906 kubelet[2821]: E1105 16:05:03.575849 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358" Nov 5 16:05:03.587145 kubelet[2821]: I1105 16:05:03.586935 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jzmkb" podStartSLOduration=41.586916629 podStartE2EDuration="41.586916629s" podCreationTimestamp="2025-11-05 16:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:05:03.584547453 +0000 UTC m=+47.343701076" watchObservedRunningTime="2025-11-05 16:05:03.586916629 +0000 UTC m=+47.346070252" Nov 5 16:05:03.839564 systemd-networkd[1509]: calieb5cd826248: Gained IPv6LL Nov 5 16:05:04.276993 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:48784.service - OpenSSH per-connection server daemon (10.0.0.1:48784). Nov 5 16:05:04.360545 sshd[4951]: Accepted publickey for core from 10.0.0.1 port 48784 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:04.362167 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:04.366972 systemd-logind[1592]: New session 10 of user core. Nov 5 16:05:04.371481 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 16:05:04.500304 sshd[4955]: Connection closed by 10.0.0.1 port 48784 Nov 5 16:05:04.500704 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:04.506599 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:48784.service: Deactivated successfully. Nov 5 16:05:04.509221 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 16:05:04.510023 systemd-logind[1592]: Session 10 logged out. Waiting for processes to exit. Nov 5 16:05:04.511259 systemd-logind[1592]: Removed session 10. Nov 5 16:05:04.576302 kubelet[2821]: E1105 16:05:04.576175 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:05.577863 kubelet[2821]: E1105 16:05:05.577808 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:09.520403 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:48798.service - OpenSSH per-connection server daemon (10.0.0.1:48798). Nov 5 16:05:09.576144 sshd[4985]: Accepted publickey for core from 10.0.0.1 port 48798 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:09.577488 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:09.581830 systemd-logind[1592]: New session 11 of user core. 
Nov 5 16:05:09.589511 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 16:05:09.701019 sshd[4988]: Connection closed by 10.0.0.1 port 48798 Nov 5 16:05:09.701428 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:09.711828 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:48798.service: Deactivated successfully. Nov 5 16:05:09.714506 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 16:05:09.715490 systemd-logind[1592]: Session 11 logged out. Waiting for processes to exit. Nov 5 16:05:09.719268 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:48800.service - OpenSSH per-connection server daemon (10.0.0.1:48800). Nov 5 16:05:09.720312 systemd-logind[1592]: Removed session 11. Nov 5 16:05:09.772416 sshd[5003]: Accepted publickey for core from 10.0.0.1 port 48800 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:09.774253 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:09.779356 systemd-logind[1592]: New session 12 of user core. Nov 5 16:05:09.783537 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 16:05:09.923530 sshd[5012]: Connection closed by 10.0.0.1 port 48800 Nov 5 16:05:09.925697 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:09.934837 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:48800.service: Deactivated successfully. Nov 5 16:05:09.938200 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 16:05:09.939415 systemd-logind[1592]: Session 12 logged out. Waiting for processes to exit. Nov 5 16:05:09.944090 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:48816.service - OpenSSH per-connection server daemon (10.0.0.1:48816). Nov 5 16:05:09.945131 systemd-logind[1592]: Removed session 12. Nov 5 16:05:09.993054 sshd[5023]: Accepted publickey for core from 10.0.0.1 port 48816 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:09.994303 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:09.998811 systemd-logind[1592]: New session 13 of user core. Nov 5 16:05:10.006514 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 16:05:10.117081 sshd[5026]: Connection closed by 10.0.0.1 port 48816 Nov 5 16:05:10.117328 sshd-session[5023]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:10.122472 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:48816.service: Deactivated successfully. Nov 5 16:05:10.124822 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 16:05:10.125671 systemd-logind[1592]: Session 13 logged out. Waiting for processes to exit. Nov 5 16:05:10.127074 systemd-logind[1592]: Removed session 13. 
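
Every image failure in this log follows the same shape: containerd resolves the v3.30.4 tag against ghcr.io, the registry answers 404 Not Found, and the kubelet surfaces it as ErrImagePull. A minimal sketch that reproduces the resolution step through containerd's Go client, assuming access to the node's containerd socket at the default /run/containerd/containerd.sock; the k8s.io namespace is the one shown in the shim lines above:

    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same containerd instance the kubelet uses.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            fmt.Println("connect:", err)
            return
        }
        defer client.Close()

        // The log shows CRI pulls running in the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // A missing tag surfaces here as the same "not found" error
        // containerd records in the entries above.
        _, err = client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.30.4",
            containerd.WithPullUnpack)
        if err != nil {
            fmt.Println("pull failed:", err)
        }
    }
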
Nov 5 16:05:14.362488 containerd[1624]: time="2025-11-05T16:05:14.362414275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:05:14.876633 containerd[1624]: time="2025-11-05T16:05:14.876565703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:14.877984 containerd[1624]: time="2025-11-05T16:05:14.877936026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:05:14.877984 containerd[1624]: time="2025-11-05T16:05:14.877976062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:05:14.878269 kubelet[2821]: E1105 16:05:14.878204 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:05:14.878669 kubelet[2821]: E1105 16:05:14.878282 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:05:14.878669 kubelet[2821]: E1105 16:05:14.878597 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ghhp8_calico-system(95105778-77b0-4ad6-94f0-b022607ec4da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:14.878790 containerd[1624]: time="2025-11-05T16:05:14.878703113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:05:14.880371 kubelet[2821]: E1105 16:05:14.880307 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ghhp8" podUID="95105778-77b0-4ad6-94f0-b022607ec4da" Nov 5 16:05:15.131521 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:48240.service - OpenSSH per-connection server daemon (10.0.0.1:48240). Nov 5 16:05:15.186720 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 48240 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:15.188029 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:15.192560 systemd-logind[1592]: New session 14 of user core. Nov 5 16:05:15.204482 systemd[1]: Started session-14.scope - Session 14 of User core. 
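
Stepping back to the Calico IPAM lines at the top of this excerpt: the plugin auto-assigned 192.168.88.136 out of a /26, which matches Calico's default practice of carving the pod CIDR into per-node /26 affinity blocks. A small standard-library sketch confirming the address sits inside its containing block; the 192.168.88.128/26 boundary is derived arithmetic (.128 through .191), not read from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // 192.168.88.136 with a /26 mask lands in the block
        // 192.168.88.128/26, i.e. addresses .128 through .191.
        block := netip.MustParsePrefix("192.168.88.128/26")
        addr := netip.MustParseAddr("192.168.88.136")
        fmt.Println(block.Contains(addr)) // prints: true
    }
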
Nov 5 16:05:15.245851 containerd[1624]: time="2025-11-05T16:05:15.245807476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:15.247102 containerd[1624]: time="2025-11-05T16:05:15.247058499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:05:15.247159 containerd[1624]: time="2025-11-05T16:05:15.247142810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:05:15.247389 kubelet[2821]: E1105 16:05:15.247309 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:15.247500 kubelet[2821]: E1105 16:05:15.247398 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:15.247621 kubelet[2821]: E1105 16:05:15.247579 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgxdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-55c4bf75cc-g4fsv_calico-apiserver(b05ca89b-5f9b-44f1-a3ba-63e56589f0e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:15.248841 kubelet[2821]: E1105 16:05:15.248765 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv" podUID="b05ca89b-5f9b-44f1-a3ba-63e56589f0e4" Nov 5 16:05:15.314737 sshd[5045]: Connection closed by 10.0.0.1 port 48240 Nov 5 16:05:15.315060 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:15.320018 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:48240.service: Deactivated successfully. Nov 5 16:05:15.322271 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 16:05:15.323177 systemd-logind[1592]: Session 14 logged out. Waiting for processes to exit. Nov 5 16:05:15.324992 systemd-logind[1592]: Removed session 14. 
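
Since every v3.30.4 reference resolves to a 404, the natural next step is to ask the registry which tags it actually serves. A short sketch using go-containerregistry's crane package (an assumption of this note, not a tool appearing in the log), runnable from any machine with network access to ghcr.io:

    package main

    import (
        "fmt"
        "log"

        "github.com/google/go-containerregistry/pkg/crane"
    )

    func main() {
        // List the published tags for one of the failing repositories;
        // v3.30.4 returning 404 suggests the tag was never pushed
        // (or has been removed).
        tags, err := crane.ListTags("ghcr.io/flatcar/calico/apiserver")
        if err != nil {
            log.Fatal(err)
        }
        for _, t := range tags {
            fmt.Println(t)
        }
    }

Comparing that list against the v3.30.4 the operator requested pins down whether the fix is publishing the missing tag or pointing the deployment at a tag that exists.
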
Nov 5 16:05:15.362209 containerd[1624]: time="2025-11-05T16:05:15.362180343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:05:16.804973 containerd[1624]: time="2025-11-05T16:05:16.804908029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:16.833946 containerd[1624]: time="2025-11-05T16:05:16.833898041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:05:16.834005 containerd[1624]: time="2025-11-05T16:05:16.833953216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:05:16.834209 kubelet[2821]: E1105 16:05:16.834155 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:05:16.834209 kubelet[2821]: E1105 16:05:16.834205 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:05:16.834521 kubelet[2821]: E1105 16:05:16.834433 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2d7ccb3c6a0c49d4ae276381119de287,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ctl7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5864c6d54c-kz7l7_calico-system(02617f58-0688-49a8-be3f-9c86c801f751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:16.834708 containerd[1624]: time="2025-11-05T16:05:16.834674345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:05:17.255485 containerd[1624]: time="2025-11-05T16:05:17.255434978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:17.256689 containerd[1624]: time="2025-11-05T16:05:17.256613601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:05:17.256689 containerd[1624]: time="2025-11-05T16:05:17.256668967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:05:17.256885 kubelet[2821]: E1105 16:05:17.256841 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:05:17.256938 kubelet[2821]: E1105 16:05:17.256897 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:05:17.257258 containerd[1624]: time="2025-11-05T16:05:17.257223818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:05:17.257439 kubelet[2821]: E1105 16:05:17.257200 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqz4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-546f546666-6794m_calico-system(f5389104-99ae-4ef4-ba0e-916e3b8ce467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:17.259377 kubelet[2821]: E1105 16:05:17.258680 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546f546666-6794m" podUID="f5389104-99ae-4ef4-ba0e-916e3b8ce467" Nov 5 16:05:17.758940 containerd[1624]: time="2025-11-05T16:05:17.758887768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:17.760027 containerd[1624]: time="2025-11-05T16:05:17.759974295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:05:17.760027 containerd[1624]: time="2025-11-05T16:05:17.760020082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:05:17.760253 kubelet[2821]: E1105 16:05:17.760207 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:05:17.760308 kubelet[2821]: E1105 16:05:17.760268 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:05:17.760697 kubelet[2821]: E1105 16:05:17.760642 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctl7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5864c6d54c-kz7l7_calico-system(02617f58-0688-49a8-be3f-9c86c801f751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:17.760802 containerd[1624]: time="2025-11-05T16:05:17.760692928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:05:17.762746 kubelet[2821]: E1105 16:05:17.762223 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5864c6d54c-kz7l7" podUID="02617f58-0688-49a8-be3f-9c86c801f751" Nov 5 16:05:18.209261 containerd[1624]: 
time="2025-11-05T16:05:18.209203647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:18.210442 containerd[1624]: time="2025-11-05T16:05:18.210402607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:05:18.210521 containerd[1624]: time="2025-11-05T16:05:18.210479855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:05:18.210677 kubelet[2821]: E1105 16:05:18.210622 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:18.210677 kubelet[2821]: E1105 16:05:18.210672 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:18.211126 kubelet[2821]: E1105 16:05:18.210940 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp2vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qm7k4_calico-system(998850e6-5a3e-41d3-948e-1a886bae0358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:18.211210 containerd[1624]: time="2025-11-05T16:05:18.210976564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:05:18.577147 containerd[1624]: time="2025-11-05T16:05:18.576979334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:18.578137 containerd[1624]: time="2025-11-05T16:05:18.578095185Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:05:18.578307 containerd[1624]: time="2025-11-05T16:05:18.578133890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:05:18.578450 kubelet[2821]: E1105 16:05:18.578402 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:18.578505 kubelet[2821]: E1105 16:05:18.578480 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:18.578817 kubelet[2821]: E1105 16:05:18.578733 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n89g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-55c4bf75cc-smcwt_calico-apiserver(3b0524f3-6d33-4a2f-8ac8-972312ac8fcc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:18.579191 containerd[1624]: time="2025-11-05T16:05:18.579109954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:05:18.580213 kubelet[2821]: E1105 16:05:18.580160 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt" podUID="3b0524f3-6d33-4a2f-8ac8-972312ac8fcc" Nov 5 16:05:18.960704 containerd[1624]: time="2025-11-05T16:05:18.960647811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:18.961777 containerd[1624]: time="2025-11-05T16:05:18.961737653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:05:18.961830 containerd[1624]: time="2025-11-05T16:05:18.961814720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:05:18.962008 kubelet[2821]: E1105 16:05:18.961959 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:18.962091 kubelet[2821]: E1105 16:05:18.962014 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:18.962206 kubelet[2821]: E1105 16:05:18.962137 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp2vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qm7k4_calico-system(998850e6-5a3e-41d3-948e-1a886bae0358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:18.963439 kubelet[2821]: E1105 16:05:18.963377 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358" Nov 5 16:05:20.329534 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:48070.service - OpenSSH per-connection server daemon (10.0.0.1:48070). 
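The repeated `fetch failed after status: 404 Not Found` / `failed to resolve reference ...: not found` pairs above mean containerd asked the registry for the manifest behind each `ghcr.io/flatcar/calico/*:v3.30.4` reference and got a 404 back: the tag (or the repository itself) does not exist on ghcr.io. A minimal Go sketch that reproduces the same check by hand, assuming GHCR's standard Docker Registry v2 anonymous token flow (the `/token` endpoint, headers, and media type below come from that protocol, not from this log):

```go
// probe_manifest.go - not containerd's code path; a hand-rolled reproduction
// of the 404 it logs. Repo and tag are copied from the failing references.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/csi", "v3.30.4"

	// Anonymous pull token (Docker Registry v2 token auth as served by ghcr.io).
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// Resolve the tag to a manifest; a 404 here is what containerd reports
	// as "failed to resolve reference ...: not found".
	req, err := http.NewRequest("GET", "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println(res.Status) // a missing tag yields "404 Not Found"
}
```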
Nov 5 16:05:20.394505 sshd[5070]: Accepted publickey for core from 10.0.0.1 port 48070 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:20.396275 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:20.400677 systemd-logind[1592]: New session 15 of user core. Nov 5 16:05:20.408483 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 16:05:20.533410 sshd[5073]: Connection closed by 10.0.0.1 port 48070 Nov 5 16:05:20.533780 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:20.538808 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:48070.service: Deactivated successfully. Nov 5 16:05:20.541039 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 16:05:20.541923 systemd-logind[1592]: Session 15 logged out. Waiting for processes to exit. Nov 5 16:05:20.543441 systemd-logind[1592]: Removed session 15. Nov 5 16:05:25.559305 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:48084.service - OpenSSH per-connection server daemon (10.0.0.1:48084). Nov 5 16:05:25.615126 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 48084 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:25.617235 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:25.622428 systemd-logind[1592]: New session 16 of user core. Nov 5 16:05:25.632529 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 16:05:25.740119 sshd[5093]: Connection closed by 10.0.0.1 port 48084 Nov 5 16:05:25.740459 sshd-session[5090]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:25.745893 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:48084.service: Deactivated successfully. Nov 5 16:05:25.747926 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 16:05:25.748900 systemd-logind[1592]: Session 16 logged out. Waiting for processes to exit. Nov 5 16:05:25.750339 systemd-logind[1592]: Removed session 16. 
Nov 5 16:05:27.364089 kubelet[2821]: E1105 16:05:27.363963 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv" podUID="b05ca89b-5f9b-44f1-a3ba-63e56589f0e4" Nov 5 16:05:28.363180 kubelet[2821]: E1105 16:05:28.363101 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546f546666-6794m" podUID="f5389104-99ae-4ef4-ba0e-916e3b8ce467" Nov 5 16:05:28.598949 containerd[1624]: time="2025-11-05T16:05:28.598902817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28c25102036bb67446e69c1f33a0400551281b99614a3bbf8fd55416ca162697\" id:\"c050f87d29211326ce18951ef5d8c929f0901b986cd52e4e5d0d38c32622e0fc\" pid:5119 exited_at:{seconds:1762358728 nanos:598527914}" Nov 5 16:05:28.601321 kubelet[2821]: E1105 16:05:28.601291 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:29.362759 kubelet[2821]: E1105 16:05:29.362707 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ghhp8" podUID="95105778-77b0-4ad6-94f0-b022607ec4da" Nov 5 16:05:30.364411 kubelet[2821]: E1105 16:05:30.364335 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt" podUID="3b0524f3-6d33-4a2f-8ac8-972312ac8fcc" Nov 5 16:05:30.364878 kubelet[2821]: E1105 16:05:30.364207 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:30.754632 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:49052.service - OpenSSH per-connection server daemon (10.0.0.1:49052). 
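The `dns.go:153 "Nameserver limits exceeded"` entries interleaved above are kubelet warning that the node's resolv.conf lists more nameservers than the resolver limit of three (glibc's MAXNS), so only the first three (`1.1.1.1 1.0.0.1 8.8.8.8`) are applied to pod DNS. A self-contained sketch of that truncation, assuming a conventional resolv.conf layout; illustrative only, not kubelet's implementation:

```go
// resolvconf_limit.go - mimics the truncation behind the "Nameserver limits
// exceeded ... some nameservers have been omitted" warning in the log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit (MAXNS)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Collect "nameserver <addr>" lines in file order.
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("limit exceeded: omitting %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```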
Nov 5 16:05:30.825262 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 49052 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:30.827034 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:30.831990 systemd-logind[1592]: New session 17 of user core. Nov 5 16:05:30.839482 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 16:05:30.962225 sshd[5135]: Connection closed by 10.0.0.1 port 49052 Nov 5 16:05:30.962741 sshd-session[5132]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:30.972318 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:49052.service: Deactivated successfully. Nov 5 16:05:30.974692 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 16:05:30.975630 systemd-logind[1592]: Session 17 logged out. Waiting for processes to exit. Nov 5 16:05:30.979400 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:49056.service - OpenSSH per-connection server daemon (10.0.0.1:49056). Nov 5 16:05:30.980215 systemd-logind[1592]: Removed session 17. Nov 5 16:05:31.040303 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 49056 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:31.042259 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:31.047281 systemd-logind[1592]: New session 18 of user core. Nov 5 16:05:31.054501 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 16:05:31.375023 kubelet[2821]: E1105 16:05:31.374646 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5864c6d54c-kz7l7" podUID="02617f58-0688-49a8-be3f-9c86c801f751" Nov 5 16:05:31.375023 kubelet[2821]: E1105 16:05:31.374686 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358" Nov 5 16:05:31.683955 sshd[5152]: Connection closed by 10.0.0.1 port 49056 Nov 5 16:05:31.686429 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:31.695144 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:49056.service: Deactivated successfully. Nov 5 16:05:31.698649 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 16:05:31.699461 systemd-logind[1592]: Session 18 logged out. Waiting for processes to exit. Nov 5 16:05:31.702773 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:49062.service - OpenSSH per-connection server daemon (10.0.0.1:49062). Nov 5 16:05:31.703917 systemd-logind[1592]: Removed session 18. Nov 5 16:05:31.765479 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 49062 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:31.766971 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:31.771793 systemd-logind[1592]: New session 19 of user core. Nov 5 16:05:31.780480 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 16:05:32.278176 sshd[5169]: Connection closed by 10.0.0.1 port 49062 Nov 5 16:05:32.279596 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:32.294573 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:49062.service: Deactivated successfully. Nov 5 16:05:32.299573 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 16:05:32.302097 systemd-logind[1592]: Session 19 logged out. Waiting for processes to exit. Nov 5 16:05:32.306432 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:49072.service - OpenSSH per-connection server daemon (10.0.0.1:49072). Nov 5 16:05:32.307758 systemd-logind[1592]: Removed session 19. Nov 5 16:05:32.358040 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 49072 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:32.359419 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:32.364506 systemd-logind[1592]: New session 20 of user core. Nov 5 16:05:32.375499 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 16:05:32.580258 sshd[5193]: Connection closed by 10.0.0.1 port 49072 Nov 5 16:05:32.581521 sshd-session[5190]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:32.590738 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:49072.service: Deactivated successfully. Nov 5 16:05:32.593034 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 16:05:32.596525 systemd-logind[1592]: Session 20 logged out. Waiting for processes to exit. Nov 5 16:05:32.597700 systemd[1]: Started sshd@21-10.0.0.150:22-10.0.0.1:49076.service - OpenSSH per-connection server daemon (10.0.0.1:49076). Nov 5 16:05:32.599102 systemd-logind[1592]: Removed session 20. Nov 5 16:05:32.664397 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 49076 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:32.666448 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:32.671408 systemd-logind[1592]: New session 21 of user core. Nov 5 16:05:32.682533 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 5 16:05:32.801731 sshd[5209]: Connection closed by 10.0.0.1 port 49076 Nov 5 16:05:32.802081 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:32.806941 systemd[1]: sshd@21-10.0.0.150:22-10.0.0.1:49076.service: Deactivated successfully. Nov 5 16:05:32.809180 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 16:05:32.810278 systemd-logind[1592]: Session 21 logged out. Waiting for processes to exit. Nov 5 16:05:32.812115 systemd-logind[1592]: Removed session 21. Nov 5 16:05:34.361444 kubelet[2821]: E1105 16:05:34.361388 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:35.361277 kubelet[2821]: E1105 16:05:35.361236 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:36.364376 kubelet[2821]: E1105 16:05:36.364009 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:05:37.819415 systemd[1]: Started sshd@22-10.0.0.150:22-10.0.0.1:49078.service - OpenSSH per-connection server daemon (10.0.0.1:49078). Nov 5 16:05:37.881673 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 49078 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:37.883175 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:37.887593 systemd-logind[1592]: New session 22 of user core. Nov 5 16:05:37.899485 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 16:05:38.011177 sshd[5230]: Connection closed by 10.0.0.1 port 49078 Nov 5 16:05:38.011546 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:38.015653 systemd[1]: sshd@22-10.0.0.150:22-10.0.0.1:49078.service: Deactivated successfully. Nov 5 16:05:38.017565 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 16:05:38.018372 systemd-logind[1592]: Session 22 logged out. Waiting for processes to exit. Nov 5 16:05:38.019535 systemd-logind[1592]: Removed session 22. 
Nov 5 16:05:41.361999 containerd[1624]: time="2025-11-05T16:05:41.361946318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:05:41.821561 containerd[1624]: time="2025-11-05T16:05:41.821494192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:41.822880 containerd[1624]: time="2025-11-05T16:05:41.822843731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:05:41.822974 containerd[1624]: time="2025-11-05T16:05:41.822930515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:05:41.823224 kubelet[2821]: E1105 16:05:41.823166 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:41.823628 kubelet[2821]: E1105 16:05:41.823243 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:41.823628 kubelet[2821]: E1105 16:05:41.823495 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgxdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-55c4bf75cc-g4fsv_calico-apiserver(b05ca89b-5f9b-44f1-a3ba-63e56589f0e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:41.824335 containerd[1624]: time="2025-11-05T16:05:41.824225160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:05:41.825562 kubelet[2821]: E1105 16:05:41.825505 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-g4fsv" podUID="b05ca89b-5f9b-44f1-a3ba-63e56589f0e4" Nov 5 16:05:42.180381 containerd[1624]: time="2025-11-05T16:05:42.180206807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:42.181520 containerd[1624]: time="2025-11-05T16:05:42.181480961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:05:42.181613 containerd[1624]: time="2025-11-05T16:05:42.181546897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:05:42.181753 kubelet[2821]: E1105 16:05:42.181704 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:05:42.181813 kubelet[2821]: E1105 16:05:42.181769 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:05:42.181994 kubelet[2821]: E1105 16:05:42.181937 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqz4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-546f546666-6794m_calico-system(f5389104-99ae-4ef4-ba0e-916e3b8ce467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:42.183118 kubelet[2821]: E1105 16:05:42.183087 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-546f546666-6794m" podUID="f5389104-99ae-4ef4-ba0e-916e3b8ce467" Nov 5 16:05:42.363546 containerd[1624]: time="2025-11-05T16:05:42.363160284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:05:42.714271 containerd[1624]: time="2025-11-05T16:05:42.714203997Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:42.715537 containerd[1624]: time="2025-11-05T16:05:42.715488992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:05:42.715627 containerd[1624]: time="2025-11-05T16:05:42.715569205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:05:42.715768 kubelet[2821]: E1105 16:05:42.715713 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:42.715835 kubelet[2821]: E1105 16:05:42.715772 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:42.716333 kubelet[2821]: E1105 16:05:42.715999 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n89g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-55c4bf75cc-smcwt_calico-apiserver(3b0524f3-6d33-4a2f-8ac8-972312ac8fcc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:42.716540 containerd[1624]: time="2025-11-05T16:05:42.716110231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:05:42.717270 kubelet[2821]: E1105 16:05:42.717240 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55c4bf75cc-smcwt" podUID="3b0524f3-6d33-4a2f-8ac8-972312ac8fcc" Nov 5 16:05:43.028792 systemd[1]: Started sshd@23-10.0.0.150:22-10.0.0.1:45972.service - OpenSSH per-connection server daemon (10.0.0.1:45972). 
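Each `kuberuntime_manager.go:1358 "Unhandled Error"` entry dumps the whole container spec as a Go struct (`&Container{...}`), which makes the lines hard to read but is just `k8s.io/api/core/v1.Container` printed verbatim. For readability, here is the calico-apiserver container's key fields rebuilt from the dump above; all values are copied from the log, everything else is elided:

```go
// container_spec.go - the calico-apiserver spec from the log, reconstructed
// with the upstream API types rather than read out of the struct dump.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	c := corev1.Container{
		Name:  "calico-apiserver",
		Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4", // the tag that 404s
		Args: []string{
			"--secure-port=5443",
			"--tls-private-key-file=/calico-apiserver-certs/tls.key",
			"--tls-cert-file=/calico-apiserver-certs/tls.crt",
		},
		Env: []corev1.EnvVar{
			{Name: "DATASTORE_TYPE", Value: "kubernetes"},
			{Name: "LOG_LEVEL", Value: "info"},
		},
		// Port:{0 5443} in the dump is an intstr.IntOrString holding an int.
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path:   "/readyz",
					Port:   intstr.FromInt(5443),
					Scheme: corev1.URISchemeHTTPS,
				},
			},
			TimeoutSeconds: 5,
			PeriodSeconds:  60,
		},
		SecurityContext: &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:    int64Ptr(10001),
			RunAsNonRoot: boolPtr(true),
		},
	}
	fmt.Printf("%+v\n", c) // prints in the same &Container{...} style as the log
}
```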
Nov 5 16:05:43.060959 containerd[1624]: time="2025-11-05T16:05:43.060915535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:43.062144 containerd[1624]: time="2025-11-05T16:05:43.062101041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:05:43.062238 containerd[1624]: time="2025-11-05T16:05:43.062180612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:05:43.062444 kubelet[2821]: E1105 16:05:43.062387 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:43.062848 kubelet[2821]: E1105 16:05:43.062458 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:43.062848 kubelet[2821]: E1105 16:05:43.062644 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp2vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qm7k4_calico-system(998850e6-5a3e-41d3-948e-1a886bae0358): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:43.064899 containerd[1624]: time="2025-11-05T16:05:43.064844780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:05:43.086800 sshd[5252]: Accepted publickey for core from 10.0.0.1 port 45972 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:43.088463 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:43.092891 systemd-logind[1592]: New session 23 of user core. Nov 5 16:05:43.103489 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 16:05:43.212494 sshd[5255]: Connection closed by 10.0.0.1 port 45972 Nov 5 16:05:43.212860 sshd-session[5252]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:43.216531 systemd[1]: sshd@23-10.0.0.150:22-10.0.0.1:45972.service: Deactivated successfully. Nov 5 16:05:43.218796 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 16:05:43.220500 systemd-logind[1592]: Session 23 logged out. Waiting for processes to exit. Nov 5 16:05:43.221948 systemd-logind[1592]: Removed session 23. Nov 5 16:05:43.435397 containerd[1624]: time="2025-11-05T16:05:43.435236637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:43.436510 containerd[1624]: time="2025-11-05T16:05:43.436452371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:05:43.436589 containerd[1624]: time="2025-11-05T16:05:43.436527334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:05:43.436755 kubelet[2821]: E1105 16:05:43.436706 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:43.436825 kubelet[2821]: E1105 16:05:43.436766 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:43.436993 kubelet[2821]: E1105 16:05:43.436953 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp2vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qm7k4_calico-system(998850e6-5a3e-41d3-948e-1a886bae0358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:43.437129 containerd[1624]: time="2025-11-05T16:05:43.437108344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:05:43.438272 kubelet[2821]: E1105 16:05:43.438215 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qm7k4" podUID="998850e6-5a3e-41d3-948e-1a886bae0358" Nov 5 16:05:43.898138 containerd[1624]: time="2025-11-05T16:05:43.898071905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:43.900945 containerd[1624]: time="2025-11-05T16:05:43.900906336Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:05:43.901002 containerd[1624]: time="2025-11-05T16:05:43.900957693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:05:43.901213 kubelet[2821]: E1105 16:05:43.901139 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:05:43.901281 kubelet[2821]: E1105 16:05:43.901212 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:05:43.901469 kubelet[2821]: E1105 16:05:43.901399 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ghhp8_calico-system(95105778-77b0-4ad6-94f0-b022607ec4da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:43.902620 kubelet[2821]: E1105 16:05:43.902583 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ghhp8" podUID="95105778-77b0-4ad6-94f0-b022607ec4da" Nov 5 16:05:44.362396 containerd[1624]: time="2025-11-05T16:05:44.362291092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:05:44.715445 containerd[1624]: time="2025-11-05T16:05:44.715405338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:44.717102 containerd[1624]: time="2025-11-05T16:05:44.717025548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:05:44.717102 containerd[1624]: time="2025-11-05T16:05:44.717074691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:05:44.717309 kubelet[2821]: E1105 16:05:44.717266 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:05:44.717625 kubelet[2821]: E1105 16:05:44.717321 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:05:44.717625 kubelet[2821]: E1105 16:05:44.717466 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2d7ccb3c6a0c49d4ae276381119de287,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ctl7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5864c6d54c-kz7l7_calico-system(02617f58-0688-49a8-be3f-9c86c801f751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:44.719776 containerd[1624]: time="2025-11-05T16:05:44.719739680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:05:45.083491 containerd[1624]: time="2025-11-05T16:05:45.083317100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:45.084542 containerd[1624]: time="2025-11-05T16:05:45.084481746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:05:45.084610 containerd[1624]: time="2025-11-05T16:05:45.084546600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:05:45.084746 kubelet[2821]: E1105 16:05:45.084709 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:05:45.084807 kubelet[2821]: E1105 16:05:45.084761 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:05:45.084918 kubelet[2821]: E1105 16:05:45.084882 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctl7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5864c6d54c-kz7l7_calico-system(02617f58-0688-49a8-be3f-9c86c801f751): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:45.086109 kubelet[2821]: E1105 16:05:45.086055 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5864c6d54c-kz7l7" podUID="02617f58-0688-49a8-be3f-9c86c801f751" Nov 5 16:05:48.228558 systemd[1]: Started sshd@24-10.0.0.150:22-10.0.0.1:45984.service - OpenSSH per-connection server daemon (10.0.0.1:45984). 
Nov 5 16:05:48.285336 sshd[5268]: Accepted publickey for core from 10.0.0.1 port 45984 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 16:05:48.287025 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:48.291168 systemd-logind[1592]: New session 24 of user core. Nov 5 16:05:48.299476 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 16:05:48.407549 sshd[5271]: Connection closed by 10.0.0.1 port 45984 Nov 5 16:05:48.407849 sshd-session[5268]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:48.412094 systemd[1]: sshd@24-10.0.0.150:22-10.0.0.1:45984.service: Deactivated successfully. Nov 5 16:05:48.414031 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 16:05:48.414803 systemd-logind[1592]: Session 24 logged out. Waiting for processes to exit. Nov 5 16:05:48.415978 systemd-logind[1592]: Removed session 24.