Nov 4 23:51:46.669196 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:51:46.669221 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:51:46.669235 kernel: BIOS-provided physical RAM map:
Nov 4 23:51:46.669242 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 4 23:51:46.669249 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 4 23:51:46.669255 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 4 23:51:46.669263 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 4 23:51:46.669270 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 4 23:51:46.669279 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 4 23:51:46.669286 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 4 23:51:46.669297 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Nov 4 23:51:46.669304 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 4 23:51:46.669311 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 4 23:51:46.669318 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 4 23:51:46.669326 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 4 23:51:46.669338 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 4 23:51:46.669347 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 4 23:51:46.669354 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 4 23:51:46.669362 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 4 23:51:46.669369 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 4 23:51:46.669376 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 4 23:51:46.669383 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 4 23:51:46.669390 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 4 23:51:46.669398 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:51:46.669405 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 4 23:51:46.669416 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 23:51:46.669423 kernel: NX (Execute Disable) protection: active
Nov 4 23:51:46.669431 kernel: APIC: Static calls initialized
Nov 4 23:51:46.669438 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable
Nov 4 23:51:46.669445 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable
Nov 4 23:51:46.669452 kernel: extended physical RAM map:
Nov 4 23:51:46.669460 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 4 23:51:46.669467 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 4 23:51:46.669474 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 4 23:51:46.669481 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 4 23:51:46.669489 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 4 23:51:46.669500 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 4 23:51:46.669507 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 4 23:51:46.669515 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable
Nov 4 23:51:46.669522 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable
Nov 4 23:51:46.669535 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable
Nov 4 23:51:46.669546 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable
Nov 4 23:51:46.669554 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable
Nov 4 23:51:46.669562 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 4 23:51:46.669570 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 4 23:51:46.669577 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 4 23:51:46.669585 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 4 23:51:46.669592 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 4 23:51:46.669600 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 4 23:51:46.669612 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 4 23:51:46.669620 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 4 23:51:46.669627 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 4 23:51:46.669635 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 4 23:51:46.669642 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 4 23:51:46.669650 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 4 23:51:46.669657 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:51:46.669665 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 4 23:51:46.669672 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 23:51:46.669682 kernel: efi: EFI v2.7 by EDK II
Nov 4 23:51:46.669690 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Nov 4 23:51:46.669702 kernel: random: crng init done
Nov 4 23:51:46.669712 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Nov 4 23:51:46.669719 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Nov 4 23:51:46.669729 kernel: secureboot: Secure boot disabled
Nov 4 23:51:46.669736 kernel: SMBIOS 2.8 present.
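The BIOS-e820 lines above are the firmware's memory map, and they are regular enough to total mechanically. A minimal sketch follows (illustrative only, not part of the boot log; the file name boot.log is a placeholder for wherever this console output was saved):

    import re

    # Matches e.g. "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable"
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

    usable = 0
    with open("boot.log") as log:                    # placeholder file name
        for line in log:
            m = E820.search(line)
            if m and m.group(3).strip() == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                usable += end - start + 1            # ranges are inclusive

    print(f"usable RAM per firmware map: {usable / 2**20:.1f} MiB")

Run against the map above this reports roughly 2.5 GiB of usable RAM, consistent with the later "Memory: 2445196K/2565800K available" line once the kernel's own reservations are subtracted.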
Nov 4 23:51:46.669777 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 4 23:51:46.669790 kernel: DMI: Memory slots populated: 1/1
Nov 4 23:51:46.669800 kernel: Hypervisor detected: KVM
Nov 4 23:51:46.669810 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 4 23:51:46.669820 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 23:51:46.669830 kernel: kvm-clock: using sched offset of 4801172947 cycles
Nov 4 23:51:46.669852 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 23:51:46.669860 kernel: tsc: Detected 2794.748 MHz processor
Nov 4 23:51:46.669868 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:51:46.669876 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:51:46.669884 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 4 23:51:46.669892 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 4 23:51:46.669900 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:51:46.669908 kernel: Using GB pages for direct mapping
Nov 4 23:51:46.669922 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:51:46.669930 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 4 23:51:46.669938 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 4 23:51:46.669946 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:51:46.669954 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:51:46.669962 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 4 23:51:46.669969 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:51:46.669982 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:51:46.669990 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:51:46.669998 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:51:46.670005 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 4 23:51:46.670013 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 4 23:51:46.670021 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 4 23:51:46.670029 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 4 23:51:46.670041 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 4 23:51:46.670049 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 4 23:51:46.670057 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 4 23:51:46.670064 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 4 23:51:46.670072 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 4 23:51:46.670080 kernel: No NUMA configuration found
Nov 4 23:51:46.670088 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Nov 4 23:51:46.670100 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Nov 4 23:51:46.670108 kernel: Zone ranges:
Nov 4 23:51:46.670116 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:51:46.670124 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Nov 4 23:51:46.670132 kernel: Normal empty
Nov 4 23:51:46.670139 kernel: Device empty
Nov 4 23:51:46.670147 kernel: Movable zone start for each node
Nov 4 23:51:46.670155 kernel: Early memory node ranges
Nov 4 23:51:46.670167 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 4 23:51:46.670177 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 4 23:51:46.670185 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 4 23:51:46.670193 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Nov 4 23:51:46.670201 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Nov 4 23:51:46.670208 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Nov 4 23:51:46.670216 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Nov 4 23:51:46.670224 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Nov 4 23:51:46.670238 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Nov 4 23:51:46.670246 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:51:46.670267 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 4 23:51:46.670279 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 4 23:51:46.670288 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:51:46.670295 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Nov 4 23:51:46.670306 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Nov 4 23:51:46.670317 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 4 23:51:46.670329 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 4 23:51:46.670344 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Nov 4 23:51:46.670352 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 23:51:46.670360 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 23:51:46.670368 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:51:46.670381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 23:51:46.670389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 23:51:46.670397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 23:51:46.670406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 23:51:46.670414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 23:51:46.670422 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:51:46.670430 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 23:51:46.670443 kernel: TSC deadline timer available
Nov 4 23:51:46.670451 kernel: CPU topo: Max. logical packages: 1
Nov 4 23:51:46.670459 kernel: CPU topo: Max. logical dies: 1
Nov 4 23:51:46.670468 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:51:46.670476 kernel: CPU topo: Max. threads per core: 1
Nov 4 23:51:46.670484 kernel: CPU topo: Num. cores per package: 4
Nov 4 23:51:46.670492 kernel: CPU topo: Num. threads per package: 4
Nov 4 23:51:46.670505 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 4 23:51:46.670513 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 23:51:46.670521 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 23:51:46.670529 kernel: kvm-guest: setup PV sched yield
Nov 4 23:51:46.670537 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 4 23:51:46.670545 kernel: Booting paravirtualized kernel on KVM
Nov 4 23:51:46.670554 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:51:46.670563 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 4 23:51:46.670575 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 4 23:51:46.670584 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 4 23:51:46.670592 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 4 23:51:46.670600 kernel: kvm-guest: PV spinlocks enabled
Nov 4 23:51:46.670608 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 23:51:46.670620 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:51:46.670632 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 23:51:46.670641 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 23:51:46.670649 kernel: Fallback order for Node 0: 0
Nov 4 23:51:46.670657 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Nov 4 23:51:46.670665 kernel: Policy zone: DMA32
Nov 4 23:51:46.670674 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:51:46.670682 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 4 23:51:46.670694 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:51:46.670703 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:51:46.670711 kernel: Dynamic Preempt: voluntary
Nov 4 23:51:46.670719 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:51:46.670728 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:51:46.670736 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 4 23:51:46.670767 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:51:46.670781 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:51:46.670790 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:51:46.670798 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:51:46.670807 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 4 23:51:46.670817 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:51:46.670825 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:51:46.670834 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:51:46.670842 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 4 23:51:46.670855 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 23:51:46.670863 kernel: Console: colour dummy device 80x25
Nov 4 23:51:46.670871 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:51:46.670879 kernel: ACPI: Core revision 20240827
Nov 4 23:51:46.670887 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 23:51:46.670896 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:51:46.670904 kernel: x2apic enabled
Nov 4 23:51:46.670914 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:51:46.670923 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 23:51:46.670931 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 23:51:46.670939 kernel: kvm-guest: setup PV IPIs
Nov 4 23:51:46.670947 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 23:51:46.670956 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 4 23:51:46.670964 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 4 23:51:46.670976 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 23:51:46.670985 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 23:51:46.670993 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 23:51:46.671001 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:51:46.671010 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 23:51:46.671021 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 23:51:46.671033 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 4 23:51:46.671047 kernel: active return thunk: retbleed_return_thunk
Nov 4 23:51:46.671055 kernel: RETBleed: Mitigation: untrained return thunk
Nov 4 23:51:46.671066 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:51:46.671074 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:51:46.671082 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 23:51:46.671091 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 23:51:46.671099 kernel: active return thunk: srso_return_thunk
Nov 4 23:51:46.671112 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 23:51:46.671120 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:51:46.671129 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:51:46.671137 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:51:46.671150 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:51:46.671158 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 23:51:46.671166 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:51:46.671180 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:51:46.671188 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:51:46.671196 kernel: landlock: Up and running.
Nov 4 23:51:46.671204 kernel: SELinux: Initializing.
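The calibration line above reports 5589.49 BogoMIPS with lpj=2794748, and the SMP bring-up a little further down reports 22357.98 BogoMIPS for all four CPUs. The relation is BogoMIPS = lpj * HZ / 500000; a quick cross-check (illustrative, and assuming HZ=1000 for this kernel build):

    lpj = 2794748                # loops-per-jiffy from the calibration line
    HZ = 1000                    # assumed tick rate for this build
    per_cpu = lpj * HZ / 500000  # 5589.496 -> printed truncated as 5589.49
    total = 4 * per_cpu          # 22357.984 -> printed truncated as 22357.98
    print(per_cpu, total)        # 5589.496 22357.984

The kernel truncates rather than rounds when printing, which is why the log shows 5589.49 per CPU.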
Nov 4 23:51:46.671212 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 23:51:46.671221 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 23:51:46.671229 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 4 23:51:46.671242 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 23:51:46.671250 kernel: ... version: 0
Nov 4 23:51:46.671258 kernel: ... bit width: 48
Nov 4 23:51:46.671266 kernel: ... generic registers: 6
Nov 4 23:51:46.671274 kernel: ... value mask: 0000ffffffffffff
Nov 4 23:51:46.671283 kernel: ... max period: 00007fffffffffff
Nov 4 23:51:46.671291 kernel: ... fixed-purpose events: 0
Nov 4 23:51:46.671301 kernel: ... event mask: 000000000000003f
Nov 4 23:51:46.671309 kernel: signal: max sigframe size: 1776
Nov 4 23:51:46.671317 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:51:46.671326 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:51:46.671336 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:51:46.671345 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:51:46.671353 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:51:46.671365 kernel: .... node #0, CPUs: #1 #2 #3
Nov 4 23:51:46.671373 kernel: smp: Brought up 1 node, 4 CPUs
Nov 4 23:51:46.671381 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 4 23:51:46.671390 kernel: Memory: 2445196K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 114668K reserved, 0K cma-reserved)
Nov 4 23:51:46.671399 kernel: devtmpfs: initialized
Nov 4 23:51:46.671407 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:51:46.671415 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 4 23:51:46.671428 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 4 23:51:46.671436 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Nov 4 23:51:46.671444 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 4 23:51:46.671453 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Nov 4 23:51:46.671461 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 4 23:51:46.671469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:51:46.671478 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 4 23:51:46.671490 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:51:46.671499 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:51:46.671507 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:51:46.671515 kernel: audit: type=2000 audit(1762300304.725:1): state=initialized audit_enabled=0 res=1
Nov 4 23:51:46.671523 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:51:46.671531 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:51:46.671540 kernel: cpuidle: using governor menu
Nov 4 23:51:46.671553 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:51:46.671561 kernel: dca service started, version 1.12.1
Nov 4 23:51:46.671570 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 4 23:51:46.671578 kernel: PCI: Using configuration type 1 for base access
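The ECAM line above maps a 256 MiB window at 0xe0000000 for PCIe configuration space: 256 buses x 32 devices x 8 functions, each with 4 KiB of config space, exactly fills it. A sketch of the standard ECAM address arithmetic (illustrative, not taken from the log):

    ECAM_BASE = 0xE0000000

    def ecam_addr(bus: int, dev: int, fn: int, offset: int = 0) -> int:
        """Physical address of a PCIe config-space byte under this ECAM base."""
        return ECAM_BASE + (bus << 20) + (dev << 15) + (fn << 12) + offset

    assert 256 * 32 * 8 * 4096 == 0x1000_0000   # 256 MiB window, bus 00-ff
    print(hex(ecam_addr(0, 0x1f, 2)))           # config space of 00:1f.2 (the SATA controller below)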
Nov 4 23:51:46.671586 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:51:46.671595 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 23:51:46.671603 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 23:51:46.671616 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:51:46.671624 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:51:46.671632 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:51:46.671640 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:51:46.671649 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:51:46.671657 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 23:51:46.671665 kernel: ACPI: Interpreter enabled
Nov 4 23:51:46.671675 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 4 23:51:46.671683 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:51:46.671692 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:51:46.671700 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 23:51:46.671708 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 23:51:46.671716 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 23:51:46.671989 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:51:46.672180 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 23:51:46.672361 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 23:51:46.672372 kernel: PCI host bridge to bus 0000:00
Nov 4 23:51:46.672546 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:51:46.672704 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 23:51:46.672897 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:51:46.673057 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 4 23:51:46.673220 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 4 23:51:46.673380 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 4 23:51:46.673538 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 23:51:46.673727 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:51:46.673970 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 23:51:46.674143 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 4 23:51:46.674327 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 4 23:51:46.674496 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 4 23:51:46.674692 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 23:51:46.674906 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 23:51:46.675143 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 4 23:51:46.675400 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 4 23:51:46.675601 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 4 23:51:46.675965 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:51:46.676292 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 4 23:51:46.676635 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 4 23:51:46.677001 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 4 23:51:46.677441 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:51:46.677803 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 4 23:51:46.678129 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 4 23:51:46.678450 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 4 23:51:46.678819 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 4 23:51:46.679156 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:51:46.679475 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 23:51:46.679841 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 23:51:46.680169 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 4 23:51:46.680513 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 4 23:51:46.680896 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 23:51:46.681827 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 4 23:51:46.681908 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 23:51:46.681958 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 23:51:46.681978 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 23:51:46.681997 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 23:51:46.682110 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 23:51:46.682130 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 23:51:46.682149 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 23:51:46.682168 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 23:51:46.682186 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 23:51:46.682204 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 23:51:46.682224 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 23:51:46.682261 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 23:51:46.682281 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 23:51:46.682300 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 23:51:46.682319 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 23:51:46.682338 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 23:51:46.682357 kernel: iommu: Default domain type: Translated
Nov 4 23:51:46.682376 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 23:51:46.682414 kernel: efivars: Registered efivars operations
Nov 4 23:51:46.682434 kernel: PCI: Using ACPI for IRQ routing
Nov 4 23:51:46.682453 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 23:51:46.682473 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 4 23:51:46.682492 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Nov 4 23:51:46.682511 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff]
Nov 4 23:51:46.682531 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff]
Nov 4 23:51:46.682566 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Nov 4 23:51:46.682587 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Nov 4 23:51:46.682608 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Nov 4 23:51:46.682626 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Nov 4 23:51:46.682980 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 23:51:46.683303 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 23:51:46.683649 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 23:51:46.683672 kernel: vgaarb: loaded
Nov 4 23:51:46.683691 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 23:51:46.683711 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 23:51:46.683730 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 23:51:46.683780 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 23:51:46.683800 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 23:51:46.683820 kernel: pnp: PnP ACPI init
Nov 4 23:51:46.684269 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 4 23:51:46.684311 kernel: pnp: PnP ACPI: found 6 devices
Nov 4 23:51:46.684346 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 23:51:46.684367 kernel: NET: Registered PF_INET protocol family
Nov 4 23:51:46.684387 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 23:51:46.684406 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 23:51:46.684444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 23:51:46.684464 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 23:51:46.684484 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 23:51:46.684504 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 23:51:46.684524 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 23:51:46.684546 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 23:51:46.684568 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 23:51:46.684605 kernel: NET: Registered PF_XDP protocol family
Nov 4 23:51:46.684998 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 4 23:51:46.685328 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 4 23:51:46.685631 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 23:51:46.685968 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 23:51:46.686271 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 23:51:46.686648 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 4 23:51:46.686981 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 4 23:51:46.687283 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 4 23:51:46.687307 kernel: PCI: CLS 0 bytes, default 64
Nov 4 23:51:46.687328 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 4 23:51:46.687367 kernel: Initialise system trusted keyrings
Nov 4 23:51:46.687388 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 23:51:46.687408 kernel: Key type asymmetric registered
Nov 4 23:51:46.687427 kernel: Asymmetric key parser 'x509' registered
Nov 4 23:51:46.687448 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 23:51:46.687470 kernel: io scheduler mq-deadline registered
Nov 4 23:51:46.687505 kernel: io scheduler kyber registered
Nov 4 23:51:46.687526 kernel: io scheduler bfq registered
Nov 4 23:51:46.687546 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 23:51:46.687568 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 23:51:46.687588 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 23:51:46.687609 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 4 23:51:46.687629 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 23:51:46.687649 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 23:51:46.687688 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 23:51:46.687710 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 23:51:46.687733 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 23:51:46.688118 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 4 23:51:46.688160 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 23:51:46.688562 kernel: rtc_cmos 00:04: registered as rtc0
Nov 4 23:51:46.688880 kernel: rtc_cmos 00:04: setting system clock to 2025-11-04T23:51:44 UTC (1762300304)
Nov 4 23:51:46.689053 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 4 23:51:46.689065 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 23:51:46.689074 kernel: efifb: probing for efifb
Nov 4 23:51:46.689083 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 4 23:51:46.689092 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 4 23:51:46.689101 kernel: efifb: scrolling: redraw
Nov 4 23:51:46.689130 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 4 23:51:46.689139 kernel: Console: switching to colour frame buffer device 160x50
Nov 4 23:51:46.689148 kernel: fb0: EFI VGA frame buffer device
Nov 4 23:51:46.689169 kernel: pstore: Using crash dump compression: deflate
Nov 4 23:51:46.689178 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 4 23:51:46.689187 kernel: NET: Registered PF_INET6 protocol family
Nov 4 23:51:46.689196 kernel: Segment Routing with IPv6
Nov 4 23:51:46.689213 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 23:51:46.689222 kernel: NET: Registered PF_PACKET protocol family
Nov 4 23:51:46.689231 kernel: Key type dns_resolver registered
Nov 4 23:51:46.689246 kernel: IPI shorthand broadcast: enabled
Nov 4 23:51:46.689255 kernel: sched_clock: Marking stable (1492002554, 307861493)->(1932898544, -133034497)
Nov 4 23:51:46.689263 kernel: registered taskstats version 1
Nov 4 23:51:46.689272 kernel: Loading compiled-in X.509 certificates
Nov 4 23:51:46.689288 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 4 23:51:46.689296 kernel: Demotion targets for Node 0: null
Nov 4 23:51:46.689305 kernel: Key type .fscrypt registered
Nov 4 23:51:46.689313 kernel: Key type fscrypt-provisioning registered
Nov 4 23:51:46.689322 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 23:51:46.689331 kernel: ima: Allocated hash algorithm: sha1
Nov 4 23:51:46.689339 kernel: ima: No architecture policies found
Nov 4 23:51:46.689348 kernel: clk: Disabling unused clocks
Nov 4 23:51:46.689364 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 4 23:51:46.689372 kernel: Write protecting the kernel read-only data: 40960k
Nov 4 23:51:46.689381 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 4 23:51:46.689390 kernel: Run /init as init process
Nov 4 23:51:46.689398 kernel: with arguments:
Nov 4 23:51:46.689408 kernel: /init
Nov 4 23:51:46.689416 kernel: with environment:
Nov 4 23:51:46.689431 kernel: HOME=/
Nov 4 23:51:46.689439 kernel: TERM=linux
Nov 4 23:51:46.689448 kernel: SCSI subsystem initialized
Nov 4 23:51:46.689457 kernel: libata version 3.00 loaded.
Nov 4 23:51:46.689640 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 23:51:46.689653 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 23:51:46.689876 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 23:51:46.690064 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 23:51:46.690246 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 23:51:46.690443 kernel: scsi host0: ahci
Nov 4 23:51:46.690632 kernel: scsi host1: ahci
Nov 4 23:51:46.690843 kernel: scsi host2: ahci
Nov 4 23:51:46.691042 kernel: scsi host3: ahci
Nov 4 23:51:46.691259 kernel: scsi host4: ahci
Nov 4 23:51:46.691442 kernel: scsi host5: ahci
Nov 4 23:51:46.691454 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 4 23:51:46.691463 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 4 23:51:46.691473 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 4 23:51:46.691494 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 4 23:51:46.691503 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Nov 4 23:51:46.691512 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Nov 4 23:51:46.691521 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 4 23:51:46.691530 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 4 23:51:46.691539 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 4 23:51:46.691547 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 23:51:46.691563 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 4 23:51:46.691572 kernel: ata3.00: applying bridge limits
Nov 4 23:51:46.691581 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 4 23:51:46.691590 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 4 23:51:46.691598 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 4 23:51:46.691607 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 23:51:46.691615 kernel: ata3.00: configured for UDMA/100
Nov 4 23:51:46.691854 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 4 23:51:46.692047 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 4 23:51:46.692218 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 4 23:51:46.692230 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 23:51:46.692239 kernel: GPT:16515071 != 27000831
Nov 4 23:51:46.692247 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 23:51:46.692269 kernel: GPT:16515071 != 27000831
Nov 4 23:51:46.692277 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 23:51:46.692292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
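The pair of numbers in the GPT warning above decodes to a disk that is larger than the image written to it: the primary header records the backup header at LBA 16515071, the last sector of a 16515072-sector image, while the virtio disk actually ends at LBA 27000831. Worked out (illustrative, not from the log):

    SECTOR = 512
    disk_sectors = 27000832       # virtio_blk: [vda] 27000832 512-byte logical blocks
    claimed_backup = 16515071     # LBA recorded in the primary GPT header

    print(disk_sectors - 1)                        # 27000831: where the backup belongs
    print((claimed_backup + 1) * SECTOR / 2**30)   # 7.875 GiB: size of the original image
    print(disk_sectors * SECTOR / 2**30)           # 12.875 GiB: size of the actual disk

This is the expected first-boot state for an image copied onto a larger virtual disk; the disk-uuid step later in this log updates the primary and secondary headers to match.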
Nov 4 23:51:46.692484 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 4 23:51:46.692496 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 4 23:51:46.692682 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 4 23:51:46.692695 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 23:51:46.692716 kernel: device-mapper: uevent: version 1.0.3
Nov 4 23:51:46.692727 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 23:51:46.692737 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 23:51:46.692773 kernel: raid6: avx2x4 gen() 23077 MB/s
Nov 4 23:51:46.692782 kernel: raid6: avx2x2 gen() 26415 MB/s
Nov 4 23:51:46.692791 kernel: raid6: avx2x1 gen() 17806 MB/s
Nov 4 23:51:46.692800 kernel: raid6: using algorithm avx2x2 gen() 26415 MB/s
Nov 4 23:51:46.692817 kernel: raid6: .... xor() 14827 MB/s, rmw enabled
Nov 4 23:51:46.692826 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 23:51:46.692835 kernel: xor: automatically using best checksumming function avx
Nov 4 23:51:46.692844 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 23:51:46.692853 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 4 23:51:46.692862 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc
Nov 4 23:51:46.692871 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:51:46.692887 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 23:51:46.692896 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 23:51:46.692905 kernel: loop: module loaded
Nov 4 23:51:46.692914 kernel: loop0: detected capacity change from 0 to 100120
Nov 4 23:51:46.692923 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 23:51:46.692933 systemd[1]: Successfully made /usr/ read-only.
Nov 4 23:51:46.692945 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:51:46.692962 systemd[1]: Detected virtualization kvm.
Nov 4 23:51:46.692971 systemd[1]: Detected architecture x86-64.
Nov 4 23:51:46.692980 systemd[1]: Running in initrd.
Nov 4 23:51:46.692989 systemd[1]: No hostname configured, using default hostname.
Nov 4 23:51:46.692999 systemd[1]: Hostname set to .
Nov 4 23:51:46.693015 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:51:46.693024 systemd[1]: Queued start job for default target initrd.target.
Nov 4 23:51:46.693033 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:51:46.693043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:51:46.693052 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:51:46.693063 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 23:51:46.693072 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:51:46.693090 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 23:51:46.693099 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 23:51:46.693109 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:51:46.693119 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:51:46.693128 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:51:46.693138 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:51:46.693154 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:51:46.693163 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:51:46.693172 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:51:46.693181 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:51:46.693191 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:51:46.693200 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 23:51:46.693210 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 23:51:46.693226 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:51:46.693235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:51:46.693245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:51:46.693254 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:51:46.693264 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 23:51:46.693273 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 23:51:46.693289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:51:46.693298 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 23:51:46.693308 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 23:51:46.693318 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 23:51:46.693327 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:51:46.693336 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:51:46.693346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:51:46.693362 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 23:51:46.693372 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:51:46.693382 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 23:51:46.693397 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:51:46.693486 systemd-journald[315]: Collecting audit messages is disabled.
Nov 4 23:51:46.693510 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 23:51:46.693519 kernel: Bridge firewalling registered
Nov 4 23:51:46.693722 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:51:46.693731 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:51:46.693760 systemd-journald[315]: Journal started
Nov 4 23:51:46.693779 systemd-journald[315]: Runtime Journal (/run/log/journal/522c2879cc1f47e6968ddcb2dc05951b) is 6M, max 48.1M, 42.1M free.
Nov 4 23:51:46.691207 systemd-modules-load[318]: Inserted module 'br_netfilter'
Nov 4 23:51:46.699777 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:51:46.702343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:51:46.709884 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 23:51:46.727136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:51:46.729809 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:51:46.734419 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:51:46.746518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:51:46.750983 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:51:46.753814 systemd-tmpfiles[339]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 23:51:46.757932 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:51:46.761947 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:51:46.764909 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 23:51:46.769576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:51:46.797433 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:51:46.833892 systemd-resolved[357]: Positive Trust Anchors:
Nov 4 23:51:46.833913 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:51:46.833918 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:51:46.833948 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:51:46.860890 systemd-resolved[357]: Defaulting to hostname 'linux'.
Nov 4 23:51:46.862949 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:51:46.866552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:51:46.943800 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 23:51:46.958802 kernel: iscsi: registered transport (tcp)
Nov 4 23:51:46.983422 kernel: iscsi: registered transport (qla4xxx)
Nov 4 23:51:46.983522 kernel: QLogic iSCSI HBA Driver
Nov 4 23:51:47.010969 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:51:47.045298 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:51:47.050643 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:51:47.181633 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:51:47.184045 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 23:51:47.188193 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 23:51:47.232947 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:51:47.238395 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:51:47.273428 systemd-udevd[599]: Using default interface naming scheme 'v257'.
Nov 4 23:51:47.288156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:51:47.295361 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 23:51:47.321697 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:51:47.328055 dracut-pre-trigger[675]: rd.md=0: removing MD RAID activation
Nov 4 23:51:47.328085 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:51:47.365012 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:51:47.370608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:51:47.387395 systemd-networkd[707]: lo: Link UP
Nov 4 23:51:47.387403 systemd-networkd[707]: lo: Gained carrier
Nov 4 23:51:47.388975 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:51:47.393985 systemd[1]: Reached target network.target - Network.
Nov 4 23:51:47.478834 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:51:47.484979 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 23:51:47.545435 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 4 23:51:47.565758 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 4 23:51:47.581253 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 4 23:51:47.629586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:51:47.637882 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 23:51:47.642872 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:51:47.642888 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 23:51:47.651208 systemd-networkd[707]: eth0: Link UP
Nov 4 23:51:47.652533 systemd-networkd[707]: eth0: Gained carrier
Nov 4 23:51:47.652549 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:51:47.680787 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 23:51:47.687585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:51:47.687717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:51:47.689848 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 23:51:47.709807 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 4 23:51:47.709833 disk-uuid[775]: Primary Header is updated.
Nov 4 23:51:47.709833 disk-uuid[775]: Secondary Entries is updated.
Nov 4 23:51:47.709833 disk-uuid[775]: Secondary Header is updated.
Nov 4 23:51:47.691865 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:51:47.704790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:51:47.724281 kernel: AES CTR mode by8 optimization enabled
Nov 4 23:51:47.747077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:51:47.747948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:51:47.787070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:51:47.805305 systemd-resolved[357]: Detected conflict on linux IN A 10.0.0.97
Nov 4 23:51:47.807011 systemd-resolved[357]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Nov 4 23:51:47.813562 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:51:47.815103 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:51:47.815576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:51:47.816118 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:51:47.818854 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 23:51:47.829959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:51:47.850788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:51:48.759048 disk-uuid[777]: Warning: The kernel is still using the old partition table.
Nov 4 23:51:48.759048 disk-uuid[777]: The new table will be used at the next reboot or after you
Nov 4 23:51:48.759048 disk-uuid[777]: run partprobe(8) or kpartx(8)
Nov 4 23:51:48.759048 disk-uuid[777]: The operation has completed successfully.
Nov 4 23:51:48.872438 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 23:51:48.872605 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 23:51:48.878403 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
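The disk-uuid warning above notes that the kernel keeps using the old partition table until a reboot or an explicit re-read. partprobe(8) ultimately asks the kernel through a block-device ioctl; one way to trigger the classic re-read directly is sketched below (illustrative; needs root, and the kernel refuses with EBUSY while any partition on the disk is in use, which is exactly the situation the warning describes):

    import fcntl

    BLKRRPART = 0x125F   # _IO(0x12, 95): re-read this disk's partition table

    # Rough equivalent of `partprobe /dev/vda` for a single disk.
    with open("/dev/vda", "rb") as disk:
        fcntl.ioctl(disk.fileno(), BLKRRPART)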
Nov 4 23:51:48.917798 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867) Nov 4 23:51:48.921280 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:51:48.921304 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:51:48.923881 systemd-networkd[707]: eth0: Gained IPv6LL Nov 4 23:51:48.926638 kernel: BTRFS info (device vda6): turning on async discard Nov 4 23:51:48.926655 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 23:51:48.933778 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:51:48.935064 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 23:51:48.939610 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 4 23:51:49.207157 ignition[886]: Ignition 2.22.0 Nov 4 23:51:49.207177 ignition[886]: Stage: fetch-offline Nov 4 23:51:49.207237 ignition[886]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:51:49.207253 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 23:51:49.207410 ignition[886]: parsed url from cmdline: "" Nov 4 23:51:49.207415 ignition[886]: no config URL provided Nov 4 23:51:49.207426 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 23:51:49.207443 ignition[886]: no config at "/usr/lib/ignition/user.ign" Nov 4 23:51:49.207503 ignition[886]: op(1): [started] loading QEMU firmware config module Nov 4 23:51:49.207510 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 4 23:51:49.224519 ignition[886]: op(1): [finished] loading QEMU firmware config module Nov 4 23:51:49.311528 ignition[886]: parsing config with SHA512: 89ddc0346f80206f93102e255016eb4d39b5008a900b6f6756885c7022c36032fd4148bde9a1980440526e59efd5c753c65fc5d161cfb3a13d52b4d6757db496 Nov 4 23:51:49.319138 unknown[886]: fetched base config from "system" Nov 4 23:51:49.319150 unknown[886]: fetched user config from "qemu" Nov 4 23:51:49.319564 ignition[886]: fetch-offline: fetch-offline passed Nov 4 23:51:49.319643 ignition[886]: Ignition finished successfully Nov 4 23:51:49.327018 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 23:51:49.329155 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 4 23:51:49.330236 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 4 23:51:49.481927 ignition[897]: Ignition 2.22.0 Nov 4 23:51:49.481944 ignition[897]: Stage: kargs Nov 4 23:51:49.482167 ignition[897]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:51:49.482178 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 23:51:49.488142 ignition[897]: kargs: kargs passed Nov 4 23:51:49.488206 ignition[897]: Ignition finished successfully Nov 4 23:51:49.494726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 23:51:49.499316 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
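[Editor's note] Ignition's fetch-offline stage above modprobes qemu_fw_cfg and later logs the SHA512 of the config it parsed. A sketch, assuming the fw_cfg key Ignition conventionally uses on QEMU (opt/com.coreos/config), of reading that blob from sysfs and hashing it the same way:

```python
import hashlib
import pathlib

# fw_cfg entry exposed by the qemu_fw_cfg module; exact path is an assumption
CFG = pathlib.Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

raw = CFG.read_bytes()
print("SHA512:", hashlib.sha512(raw).hexdigest())  # compare with the digest in the log
```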
Nov 4 23:51:49.612765 ignition[905]: Ignition 2.22.0 Nov 4 23:51:49.612778 ignition[905]: Stage: disks Nov 4 23:51:49.612939 ignition[905]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:51:49.612950 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 23:51:49.619098 ignition[905]: disks: disks passed Nov 4 23:51:49.620129 ignition[905]: Ignition finished successfully Nov 4 23:51:49.624225 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 23:51:49.625442 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 23:51:49.628259 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 23:51:49.631434 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:51:49.635217 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:51:49.638238 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:51:49.642450 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 23:51:49.679951 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 4 23:51:49.688120 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 23:51:49.693571 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 23:51:49.865770 kernel: EXT4-fs (vda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none. Nov 4 23:51:49.866909 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 23:51:49.870057 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 23:51:49.872029 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 23:51:49.876458 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 23:51:49.877775 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 4 23:51:49.877842 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 23:51:49.877896 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:51:49.896928 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923) Nov 4 23:51:49.896957 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:51:49.896988 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:51:49.897558 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 4 23:51:49.901358 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 23:51:49.908771 kernel: BTRFS info (device vda6): turning on async discard Nov 4 23:51:49.908824 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 23:51:49.910043 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 23:51:49.972764 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 23:51:49.978055 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Nov 4 23:51:49.983492 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 23:51:49.988708 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 23:51:50.106010 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
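[Editor's note] The systemd-fsck summary above is in used/total form. A quick worked reading of those numbers, copied from the log (the 4 KiB block size is an assumption for this ext4 filesystem):

```python
files_used, files_total = 15, 456736        # inodes, from "15/456736 files"
blocks_used, blocks_total = 38230, 456704   # from "38230/456704 blocks"

print(f"inodes in use: {files_used / files_total:.4%}")
print(f"blocks in use: {blocks_used / blocks_total:.2%}")     # ~8.37%
print(f"~{blocks_used * 4096 / 2**20:.0f} MiB used")          # ~149 MiB, if 4 KiB blocks
```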
Nov 4 23:51:50.110791 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 23:51:50.113429 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 23:51:50.131879 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 23:51:50.134458 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:51:50.151895 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 23:51:50.181100 ignition[1037]: INFO : Ignition 2.22.0 Nov 4 23:51:50.181100 ignition[1037]: INFO : Stage: mount Nov 4 23:51:50.183890 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:51:50.183890 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 23:51:50.183890 ignition[1037]: INFO : mount: mount passed Nov 4 23:51:50.183890 ignition[1037]: INFO : Ignition finished successfully Nov 4 23:51:50.186434 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 23:51:50.191117 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 23:51:50.220970 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 23:51:50.246030 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1048) Nov 4 23:51:50.246061 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:51:50.246074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:51:50.251406 kernel: BTRFS info (device vda6): turning on async discard Nov 4 23:51:50.251490 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 23:51:50.253830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 23:51:50.309306 ignition[1065]: INFO : Ignition 2.22.0 Nov 4 23:51:50.309306 ignition[1065]: INFO : Stage: files Nov 4 23:51:50.312452 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:51:50.312452 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 23:51:50.318654 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Nov 4 23:51:50.321703 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 23:51:50.321703 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 23:51:50.331558 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 23:51:50.334518 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 23:51:50.337716 unknown[1065]: wrote ssh authorized keys file for user: core Nov 4 23:51:50.339713 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 23:51:50.342578 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 4 23:51:50.342578 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 4 23:51:50.398204 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 23:51:50.591627 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 4 23:51:50.595333 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Nov 4 23:51:50.595333 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 23:51:50.595333 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:51:50.595333 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:51:50.595333 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:51:50.595333 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:51:50.595333 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:51:50.595333 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:51:50.619947 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:51:50.619947 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:51:50.619947 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 23:51:50.619947 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 23:51:50.619947 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 23:51:50.619947 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 4 23:51:51.037579 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 4 23:51:51.705958 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 23:51:51.705958 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 4 23:51:51.712168 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 23:51:51.715560 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 23:51:51.715560 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 4 23:51:51.715560 ignition[1065]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 4 23:51:51.715560 ignition[1065]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 23:51:51.715560 ignition[1065]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 23:51:51.715560 
ignition[1065]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 4 23:51:51.715560 ignition[1065]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 4 23:51:51.743702 ignition[1065]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 23:51:51.750802 ignition[1065]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 23:51:51.753394 ignition[1065]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 4 23:51:51.753394 ignition[1065]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 4 23:51:51.753394 ignition[1065]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 23:51:51.753394 ignition[1065]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:51:51.753394 ignition[1065]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:51:51.753394 ignition[1065]: INFO : files: files passed Nov 4 23:51:51.753394 ignition[1065]: INFO : Ignition finished successfully Nov 4 23:51:51.759856 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 23:51:51.763735 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 23:51:51.771881 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 23:51:51.787467 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 23:51:51.787635 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 4 23:51:51.796436 initrd-setup-root-after-ignition[1096]: grep: /sysroot/oem/oem-release: No such file or directory Nov 4 23:51:51.801920 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:51:51.805104 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:51:51.806134 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:51:51.805614 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:51:51.807320 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 23:51:51.816950 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 23:51:51.917388 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 23:51:51.917578 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 23:51:51.921797 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 23:51:51.924124 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 23:51:51.927928 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 4 23:51:51.929168 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 23:51:51.965332 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:51:51.969947 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
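[Editor's note] The op(f)/op(10) lines above show what "setting preset to disabled" amounts to: deleting the unit's enablement symlinks under the target root. A rough sketch; the wants-directory path is an assumption, since the real location depends on the unit's [Install] section and the preset files:

```python
import pathlib

SYSROOT = pathlib.Path("/sysroot")
unit = "coreos-metadata.service"

# Hypothetical enablement location for illustration only
link = SYSROOT / "etc/systemd/system/multi-user.target.wants" / unit
if link.is_symlink():
    link.unlink()  # "[finished] removing enablement symlink(s)"
```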
Nov 4 23:51:51.992174 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:51:51.992545 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:51:51.996075 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:51:51.997274 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 23:51:52.002193 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 23:51:52.002350 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:51:52.007494 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 23:51:52.010881 systemd[1]: Stopped target basic.target - Basic System. Nov 4 23:51:52.012242 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 23:51:52.015877 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:51:52.019195 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 23:51:52.019723 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:51:52.026324 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 23:51:52.029429 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:51:52.030293 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 23:51:52.036257 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 23:51:52.036803 systemd[1]: Stopped target swap.target - Swaps. Nov 4 23:51:52.044214 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 23:51:52.045863 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 23:51:52.049494 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:51:52.052974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:51:52.056806 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 23:51:52.057252 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:51:52.057850 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 4 23:51:52.057981 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 23:51:52.065312 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 23:51:52.065438 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 23:51:52.068790 systemd[1]: Stopped target paths.target - Path Units. Nov 4 23:51:52.069568 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 23:51:52.077838 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:51:52.078546 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 23:51:52.082688 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 23:51:52.086194 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 23:51:52.086286 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:51:52.087366 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 23:51:52.087446 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:51:52.091235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 23:51:52.091353 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Nov 4 23:51:52.094170 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 23:51:52.094282 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 23:51:52.099651 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 23:51:52.102786 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 23:51:52.102910 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:51:52.104582 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 23:51:52.110915 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 4 23:51:52.111044 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:51:52.112155 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 23:51:52.112257 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:51:52.116398 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 4 23:51:52.116501 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:51:52.128211 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 23:51:52.128376 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 23:51:52.156009 ignition[1122]: INFO : Ignition 2.22.0 Nov 4 23:51:52.156009 ignition[1122]: INFO : Stage: umount Nov 4 23:51:52.158945 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:51:52.158945 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 23:51:52.158945 ignition[1122]: INFO : umount: umount passed Nov 4 23:51:52.164186 ignition[1122]: INFO : Ignition finished successfully Nov 4 23:51:52.163185 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 23:51:52.163839 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 23:51:52.163977 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 23:51:52.165343 systemd[1]: Stopped target network.target - Network. Nov 4 23:51:52.167621 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 23:51:52.167727 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 23:51:52.171320 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 23:51:52.171391 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 23:51:52.174368 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 23:51:52.174438 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 23:51:52.177368 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 23:51:52.177422 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 23:51:52.180383 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 23:51:52.183415 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 23:51:52.200977 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 23:51:52.201129 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 23:51:52.208413 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 23:51:52.208608 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 23:51:52.215955 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 4 23:51:52.216760 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Nov 4 23:51:52.216849 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:51:52.222830 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 23:51:52.225297 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 23:51:52.225362 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:51:52.226174 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:51:52.226222 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:51:52.231765 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 23:51:52.231817 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 23:51:52.236100 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:51:52.236849 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 23:51:52.253905 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 23:51:52.256421 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 23:51:52.256539 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 23:51:52.270420 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 23:51:52.270617 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:51:52.272173 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 23:51:52.272273 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 23:51:52.277288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 23:51:52.277343 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:51:52.280510 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 23:51:52.280577 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:51:52.286847 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 23:51:52.286907 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 23:51:52.292046 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 23:51:52.292109 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:51:52.302584 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 23:51:52.303256 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 23:51:52.303336 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:51:52.304144 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 23:51:52.304201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:51:52.304655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:51:52.304706 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:51:52.306040 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 23:51:52.320890 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 23:51:52.329292 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 23:51:52.329423 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 23:51:52.332990 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Nov 4 23:51:52.334492 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 23:51:52.367546 systemd[1]: Switching root. Nov 4 23:51:52.403155 systemd-journald[315]: Journal stopped Nov 4 23:51:54.031481 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). Nov 4 23:51:54.031552 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 23:51:54.031567 kernel: SELinux: policy capability open_perms=1 Nov 4 23:51:54.031583 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 23:51:54.031595 kernel: SELinux: policy capability always_check_network=0 Nov 4 23:51:54.031615 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 23:51:54.031627 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 23:51:54.031652 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 23:51:54.031669 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 23:51:54.031682 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 23:51:54.031705 kernel: audit: type=1403 audit(1762300312.987:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 23:51:54.031729 systemd[1]: Successfully loaded SELinux policy in 80.220ms. Nov 4 23:51:54.031778 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.165ms. Nov 4 23:51:54.031796 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:51:54.031818 systemd[1]: Detected virtualization kvm. Nov 4 23:51:54.031837 systemd[1]: Detected architecture x86-64. Nov 4 23:51:54.031856 systemd[1]: Detected first boot. Nov 4 23:51:54.031869 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 23:51:54.031882 zram_generator::config[1168]: No configuration found. Nov 4 23:51:54.031895 kernel: Guest personality initialized and is inactive Nov 4 23:51:54.031907 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 4 23:51:54.031926 kernel: Initialized host personality Nov 4 23:51:54.031938 kernel: NET: Registered PF_VSOCK protocol family Nov 4 23:51:54.031950 systemd[1]: Populated /etc with preset unit settings. Nov 4 23:51:54.031965 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 23:51:54.031978 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 23:51:54.031991 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 23:51:54.032004 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 23:51:54.032033 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 23:51:54.032054 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 23:51:54.032071 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 23:51:54.032086 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 23:51:54.032102 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 23:51:54.032125 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 23:51:54.032149 systemd[1]: Created slice user.slice - User and Session Slice. 
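[Editor's note] "Initializing machine ID from SMBIOS/DMI UUID" above refers to the product UUID the hypervisor exposes through sysfs. A sketch of where that value lives on a KVM guest like this one (standard sysfs path; reading it requires root):

```python
import pathlib

uuid = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
print("SMBIOS/DMI UUID:", uuid)  # the seed systemd uses for /etc/machine-id on first boot
```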
Nov 4 23:51:54.032162 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:51:54.032177 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:51:54.032196 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 23:51:54.032214 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 4 23:51:54.032227 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 23:51:54.032240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:51:54.032259 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 4 23:51:54.032275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:51:54.032290 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:51:54.032303 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 23:51:54.032315 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 23:51:54.032328 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 23:51:54.032346 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 23:51:54.032365 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:51:54.032381 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:51:54.032394 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:51:54.032407 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:51:54.032419 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 23:51:54.032432 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 23:51:54.032447 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 23:51:54.032467 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:51:54.032483 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:51:54.032495 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:51:54.032514 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 23:51:54.032526 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 23:51:54.032539 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 23:51:54.032551 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 23:51:54.032571 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:51:54.032584 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 23:51:54.032597 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 23:51:54.032618 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 23:51:54.032632 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 23:51:54.032644 systemd[1]: Reached target machines.target - Containers. Nov 4 23:51:54.032658 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Nov 4 23:51:54.032679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:51:54.032692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:51:54.032704 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 23:51:54.032717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:51:54.032729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:51:54.032756 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:51:54.032778 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 23:51:54.032791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:51:54.032804 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 23:51:54.032817 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 23:51:54.032836 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 23:51:54.032848 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 23:51:54.032860 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 23:51:54.032880 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:51:54.032893 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:51:54.032906 kernel: fuse: init (API version 7.41) Nov 4 23:51:54.032918 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:51:54.032931 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:51:54.032945 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 23:51:54.032957 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 23:51:54.032977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 23:51:54.032990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:51:54.033002 kernel: ACPI: bus type drm_connector registered Nov 4 23:51:54.033036 systemd-journald[1232]: Collecting audit messages is disabled. Nov 4 23:51:54.033070 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 23:51:54.033085 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 23:51:54.033098 systemd-journald[1232]: Journal started Nov 4 23:51:54.033121 systemd-journald[1232]: Runtime Journal (/run/log/journal/522c2879cc1f47e6968ddcb2dc05951b) is 6M, max 48.1M, 42.1M free. Nov 4 23:51:53.716458 systemd[1]: Queued start job for default target multi-user.target. Nov 4 23:51:53.735759 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 4 23:51:53.736288 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 23:51:54.038384 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:51:54.039689 systemd[1]: Mounted media.mount - External Media Directory. 
Nov 4 23:51:54.041329 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 23:51:54.043157 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 23:51:54.044997 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 23:51:54.046872 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:51:54.049134 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 23:51:54.049356 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 23:51:54.051580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:51:54.051851 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:51:54.054094 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:51:54.054391 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:51:54.056425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:51:54.056725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:51:54.059220 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 23:51:54.059558 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 23:51:54.061789 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:51:54.062171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:51:54.064448 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:51:54.067105 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:51:54.070405 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 23:51:54.091232 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 23:51:54.105987 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:51:54.108262 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 23:51:54.111807 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 23:51:54.115191 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 23:51:54.117005 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 23:51:54.117138 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:51:54.120412 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 23:51:54.122675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:51:54.129048 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 23:51:54.133934 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 23:51:54.136051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:51:54.139900 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 23:51:54.141733 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 4 23:51:54.143379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:51:54.147872 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 23:51:54.154809 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:51:54.158615 systemd-journald[1232]: Time spent on flushing to /var/log/journal/522c2879cc1f47e6968ddcb2dc05951b is 25.550ms for 1053 entries. Nov 4 23:51:54.158615 systemd-journald[1232]: System Journal (/var/log/journal/522c2879cc1f47e6968ddcb2dc05951b) is 8M, max 163.5M, 155.5M free. Nov 4 23:51:54.205341 systemd-journald[1232]: Received client request to flush runtime journal. Nov 4 23:51:54.205396 kernel: loop1: detected capacity change from 0 to 128048 Nov 4 23:51:54.159520 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 23:51:54.163034 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 23:51:54.165714 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 23:51:54.172388 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 23:51:54.179834 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 23:51:54.182170 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 23:51:54.193891 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 23:51:54.209919 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 23:51:54.214757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:51:54.219107 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 23:51:54.224760 kernel: loop2: detected capacity change from 0 to 224512 Nov 4 23:51:54.313789 kernel: loop3: detected capacity change from 0 to 110984 Nov 4 23:51:54.343429 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 23:51:54.347786 kernel: loop4: detected capacity change from 0 to 128048 Nov 4 23:51:54.349187 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:51:54.354968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:51:54.361390 kernel: loop5: detected capacity change from 0 to 224512 Nov 4 23:51:54.424147 kernel: loop6: detected capacity change from 0 to 110984 Nov 4 23:51:54.383753 (sd-merge)[1306]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 4 23:51:54.396056 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 23:51:54.429040 (sd-merge)[1306]: Merged extensions into '/usr'. Nov 4 23:51:54.435084 systemd[1]: Reload requested from client PID 1283 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 23:51:54.435243 systemd[1]: Reloading... Nov 4 23:51:54.451113 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Nov 4 23:51:54.451148 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Nov 4 23:51:54.506773 zram_generator::config[1337]: No configuration found. Nov 4 23:51:54.594533 systemd-resolved[1307]: Positive Trust Anchors: Nov 4 23:51:54.594942 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:51:54.594951 systemd-resolved[1307]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:51:54.594982 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:51:54.599947 systemd-resolved[1307]: Defaulting to hostname 'linux'. Nov 4 23:51:54.720828 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 23:51:54.721038 systemd[1]: Reloading finished in 285 ms. Nov 4 23:51:54.758921 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 23:51:54.761050 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:51:54.763210 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 23:51:54.765796 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:51:54.772640 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:51:54.801224 systemd[1]: Starting ensure-sysext.service... Nov 4 23:51:54.837273 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:51:54.849408 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Nov 4 23:51:54.849425 systemd[1]: Reloading... Nov 4 23:51:54.862093 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 23:51:54.862135 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 23:51:54.862570 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 23:51:54.862882 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 23:51:54.863820 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 23:51:54.864072 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 4 23:51:54.864168 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 4 23:51:54.870375 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:51:54.870389 systemd-tmpfiles[1382]: Skipping /boot Nov 4 23:51:54.885409 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:51:54.885426 systemd-tmpfiles[1382]: Skipping /boot Nov 4 23:51:54.938782 zram_generator::config[1415]: No configuration found. Nov 4 23:51:55.127232 systemd[1]: Reloading finished in 277 ms. Nov 4 23:51:55.153929 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 23:51:55.198376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:51:55.210188 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:51:55.212820 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
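[Editor's note] The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") come from the same path appearing in more than one tmpfiles.d fragment. A small sketch that reproduces the check across /usr/lib/tmpfiles.d (simplified: the real precedence rules also span /etc and /run):

```python
import collections
import glob

seen = collections.defaultdict(list)
for frag in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
    with open(frag) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            fields = line.split()
            if len(fields) >= 2:
                seen[fields[1]].append(frag)  # field 2 is the path

for path, frags in seen.items():
    if len(frags) > 1:
        print(f"duplicate: {path} <- {frags}")
```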
Nov 4 23:51:55.235660 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 23:51:55.241085 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 23:51:55.246524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:51:55.252112 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 23:51:55.260490 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:51:55.260880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:51:55.266807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:51:55.271699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:51:55.281160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:51:55.284069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:51:55.284292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:51:55.284504 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:51:55.288347 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:51:55.294999 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:51:55.298919 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 23:51:55.301512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:51:55.301762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:51:55.303795 systemd-udevd[1456]: Using default interface naming scheme 'v257'. Nov 4 23:51:55.305237 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:51:55.305445 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:51:55.315102 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:51:55.315416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:51:55.317179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:51:55.320113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:51:55.329896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:51:55.331817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:51:55.331984 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:51:55.332135 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 4 23:51:55.333900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:51:55.334117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:51:55.336803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:51:55.337015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:51:55.340015 augenrules[1486]: No rules Nov 4 23:51:55.345195 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:51:55.347294 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:51:55.350021 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 23:51:55.352635 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:51:55.352931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:51:55.355298 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:51:55.358417 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 23:51:55.370596 systemd[1]: Finished ensure-sysext.service. Nov 4 23:51:55.373449 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:51:55.373650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:51:55.375013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:51:55.378001 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:51:55.381978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:51:55.384048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:51:55.384100 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:51:55.394545 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:51:55.397923 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 23:51:55.399790 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 23:51:55.399826 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:51:55.400531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:51:55.400782 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:51:55.404085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:51:55.410591 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:51:55.411884 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:51:55.414244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:51:55.414491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 4 23:51:55.422187 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:51:55.585776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 4 23:51:55.589955 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 23:51:55.597038 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 23:51:55.611673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 23:51:55.618763 kernel: ACPI: button: Power Button [PWRF] Nov 4 23:51:55.617390 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 23:51:55.621862 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 23:51:55.622396 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 23:51:55.639709 systemd-networkd[1518]: lo: Link UP Nov 4 23:51:55.639718 systemd-networkd[1518]: lo: Gained carrier Nov 4 23:51:55.641602 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:51:55.641614 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:51:55.641619 systemd-networkd[1518]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:51:55.643550 systemd[1]: Reached target network.target - Network. Nov 4 23:51:55.647123 systemd-networkd[1518]: eth0: Link UP Nov 4 23:51:55.647348 systemd-networkd[1518]: eth0: Gained carrier Nov 4 23:51:55.647368 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:51:55.650224 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 23:51:55.654075 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 23:51:55.664968 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 4 23:51:55.665329 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 4 23:51:55.665546 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 4 23:51:55.664030 systemd-networkd[1518]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 23:51:55.667180 systemd-timesyncd[1521]: Network configuration changed, trying to establish connection. Nov 4 23:51:57.531739 systemd-timesyncd[1521]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 4 23:51:57.531792 systemd-timesyncd[1521]: Initial clock synchronization to Tue 2025-11-04 23:51:57.531276 UTC. Nov 4 23:51:57.531827 systemd-resolved[1307]: Clock change detected. Flushing caches. Nov 4 23:51:57.539278 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 23:51:57.682559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:51:57.694074 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 23:51:57.708262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:51:57.711229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:51:57.720947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
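[Editor's note] Note the jump in the journal timestamps above: the entry before "Contacted time server" is stamped 23:51:55.667, the next 23:51:57.531, and systemd-resolved reports "Clock change detected". The step systemd-timesyncd applied at initial synchronization is just the difference between those two stamps:

```python
from datetime import datetime

before = datetime.fromisoformat("2025-11-04 23:51:55.667180")
after = datetime.fromisoformat("2025-11-04 23:51:57.531739")
print(after - before)  # ~1.86 s forward step applied by systemd-timesyncd
```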
Nov 4 23:51:57.847607 kernel: kvm_amd: TSC scaling supported Nov 4 23:51:57.847714 kernel: kvm_amd: Nested Virtualization enabled Nov 4 23:51:57.847732 kernel: kvm_amd: Nested Paging enabled Nov 4 23:51:57.849363 kernel: kvm_amd: LBR virtualization supported Nov 4 23:51:57.850101 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 4 23:51:57.850496 kernel: kvm_amd: Virtual GIF supported Nov 4 23:51:57.933063 kernel: EDAC MC: Ver: 3.0.0 Nov 4 23:51:57.965323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:51:58.174207 ldconfig[1453]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 23:51:58.181994 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 23:51:58.185751 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 23:51:58.209971 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 23:51:58.212068 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:51:58.213958 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 23:51:58.215977 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 23:51:58.217993 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 23:51:58.220027 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 23:51:58.221884 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 23:51:58.223913 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 23:51:58.225914 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 23:51:58.225948 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:51:58.227447 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:51:58.230058 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 23:51:58.233828 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 23:51:58.238964 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 23:51:58.241411 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 23:51:58.243525 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 23:51:58.254967 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 23:51:58.257316 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 23:51:58.259793 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 23:51:58.262255 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:51:58.263814 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:51:58.265363 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:51:58.265393 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:51:58.266586 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 23:51:58.269402 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Nov 4 23:51:58.272025 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 23:51:58.285408 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 23:51:58.289083 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 23:51:58.290829 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 23:51:58.293181 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 4 23:51:58.296143 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 23:51:58.298821 jq[1580]: false Nov 4 23:51:58.300224 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 23:51:58.303217 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 23:51:58.307996 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 23:51:58.309629 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing passwd entry cache Nov 4 23:51:58.309646 oslogin_cache_refresh[1582]: Refreshing passwd entry cache Nov 4 23:51:58.315199 extend-filesystems[1581]: Found /dev/vda6 Nov 4 23:51:58.316774 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 23:51:58.318470 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 23:51:58.319785 extend-filesystems[1581]: Found /dev/vda9 Nov 4 23:51:58.325132 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting users, quitting Nov 4 23:51:58.325132 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:51:58.325132 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing group entry cache Nov 4 23:51:58.325201 extend-filesystems[1581]: Checking size of /dev/vda9 Nov 4 23:51:58.323380 oslogin_cache_refresh[1582]: Failure getting users, quitting Nov 4 23:51:58.322732 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 23:51:58.323403 oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:51:58.324077 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 23:51:58.323457 oslogin_cache_refresh[1582]: Refreshing group entry cache Nov 4 23:51:58.330129 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 23:51:58.334675 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting groups, quitting Nov 4 23:51:58.334675 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:51:58.334666 oslogin_cache_refresh[1582]: Failure getting groups, quitting Nov 4 23:51:58.334681 oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:51:58.343259 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 23:51:58.345997 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 23:51:58.347445 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 4 23:51:58.347895 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 23:51:58.348163 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 23:51:58.351068 jq[1600]: true Nov 4 23:51:58.350884 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 23:51:58.351453 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 23:51:58.355054 extend-filesystems[1581]: Resized partition /dev/vda9 Nov 4 23:51:58.357687 extend-filesystems[1614]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 23:51:58.360169 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 23:51:58.360447 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 23:51:58.363249 update_engine[1595]: I20251104 23:51:58.363163 1595 main.cc:92] Flatcar Update Engine starting Nov 4 23:51:58.367045 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 4 23:51:58.382218 (ntainerd)[1617]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 4 23:51:58.401050 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 4 23:51:58.470272 extend-filesystems[1614]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 4 23:51:58.470272 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 4 23:51:58.470272 extend-filesystems[1614]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 4 23:51:58.479398 extend-filesystems[1581]: Resized filesystem in /dev/vda9 Nov 4 23:51:58.482472 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 23:51:58.482908 tar[1615]: linux-amd64/LICENSE Nov 4 23:51:58.483214 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 23:51:58.487118 jq[1616]: true Nov 4 23:51:58.487283 tar[1615]: linux-amd64/helm Nov 4 23:51:58.519531 systemd-logind[1590]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 23:51:58.519564 systemd-logind[1590]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 23:51:58.521173 systemd-logind[1590]: New seat seat0. Nov 4 23:51:58.529187 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 23:51:58.533660 dbus-daemon[1578]: [system] SELinux support is enabled Nov 4 23:51:58.533865 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 23:51:58.538427 update_engine[1595]: I20251104 23:51:58.538362 1595 update_check_scheduler.cc:74] Next update check in 8m11s Nov 4 23:51:58.539146 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 23:51:58.539174 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 23:51:58.541414 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 23:51:58.541760 dbus-daemon[1578]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 4 23:51:58.541433 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 23:51:58.543885 systemd[1]: Started update-engine.service - Update Engine. 
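For scale, the resize2fs output above is an online resize performed while the root filesystem on /dev/vda9 is mounted on /; the block counts are 4 KiB units, so the growth works out to

$456{,}704 \times 4096\,\mathrm{B} \approx 1.74\,\mathrm{GiB} \;\longrightarrow\; 1{,}784{,}827 \times 4096\,\mathrm{B} \approx 6.81\,\mathrm{GiB}.$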
Nov 4 23:51:58.545103 bash[1647]: Updated "/home/core/.ssh/authorized_keys" Nov 4 23:51:58.548132 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 23:51:58.550884 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 23:51:58.555333 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 4 23:51:58.640101 sshd_keygen[1608]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 23:51:58.663135 locksmithd[1648]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 23:51:58.679222 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 23:51:58.686301 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 23:51:58.702103 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 23:51:58.702385 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 23:51:58.706203 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 23:51:58.741514 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 23:51:58.746119 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 23:51:58.753319 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 23:51:58.755318 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 23:51:58.915662 containerd[1617]: time="2025-11-04T23:51:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 23:51:58.916512 containerd[1617]: time="2025-11-04T23:51:58.916480035Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 4 23:51:58.983387 systemd-networkd[1518]: eth0: Gained IPv6LL Nov 4 23:51:58.989711 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 23:51:59.001707 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 23:51:59.005539 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 4 23:51:59.011241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 4 23:51:59.012309 containerd[1617]: time="2025-11-04T23:51:59.012226654Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="24.646µs" Nov 4 23:51:59.012382 containerd[1617]: time="2025-11-04T23:51:59.012362799Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 23:51:59.012456 containerd[1617]: time="2025-11-04T23:51:59.012443861Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 23:51:59.012718 containerd[1617]: time="2025-11-04T23:51:59.012700021Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 23:51:59.012793 containerd[1617]: time="2025-11-04T23:51:59.012778489Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 23:51:59.012929 containerd[1617]: time="2025-11-04T23:51:59.012910015Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:51:59.013103 containerd[1617]: time="2025-11-04T23:51:59.013082689Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:51:59.013155 containerd[1617]: time="2025-11-04T23:51:59.013143102Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:51:59.013491 containerd[1617]: time="2025-11-04T23:51:59.013471368Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:51:59.013547 containerd[1617]: time="2025-11-04T23:51:59.013535027Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:51:59.013619 containerd[1617]: time="2025-11-04T23:51:59.013604508Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:51:59.013666 containerd[1617]: time="2025-11-04T23:51:59.013654872Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 23:51:59.013820 containerd[1617]: time="2025-11-04T23:51:59.013802279Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 23:51:59.014204 containerd[1617]: time="2025-11-04T23:51:59.014182261Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:51:59.014297 containerd[1617]: time="2025-11-04T23:51:59.014281277Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:51:59.014344 containerd[1617]: time="2025-11-04T23:51:59.014333054Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 23:51:59.014455 containerd[1617]: time="2025-11-04T23:51:59.014438562Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 23:51:59.014948 containerd[1617]: 
time="2025-11-04T23:51:59.014928150Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 23:51:59.015340 containerd[1617]: time="2025-11-04T23:51:59.015303975Z" level=info msg="metadata content store policy set" policy=shared Nov 4 23:51:59.020356 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024568249Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024639322Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024659650Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024674568Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024786257Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024807958Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024827004Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024848314Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024862390Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024874192Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024894761Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.024914548Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.025062586Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 23:51:59.028049 containerd[1617]: time="2025-11-04T23:51:59.025087162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025103252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025115966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025127638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025144810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 
4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025158475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025183532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025199102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025209511Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025223307Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025364953Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025386994Z" level=info msg="Start snapshots syncer" Nov 4 23:51:59.028323 containerd[1617]: time="2025-11-04T23:51:59.025418082Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 23:51:59.028539 containerd[1617]: time="2025-11-04T23:51:59.025704469Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 23:51:59.028539 containerd[1617]: time="2025-11-04T23:51:59.025764061Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030192165Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim 
type=io.containerd.sandbox.controller.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030322620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030346395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030356033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030367665Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030383003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030393563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030409814Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030440661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030460398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 23:51:59.030586 containerd[1617]: time="2025-11-04T23:51:59.030478082Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033199676Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033313689Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033325041Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033334819Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033342263Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033352242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033368392Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033408627Z" level=info msg="runtime interface created" Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033415390Z" level=info msg="created NRI interface" Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033432572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri 
type=io.containerd.grpc.v1 Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033447981Z" level=info msg="Connect containerd service" Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.033491864Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:51:59.035096 containerd[1617]: time="2025-11-04T23:51:59.034696593Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:51:59.055588 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 23:51:59.076145 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 23:51:59.076469 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 4 23:51:59.078876 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 23:51:59.281978 tar[1615]: linux-amd64/README.md Nov 4 23:51:59.301588 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 23:51:59.491190 containerd[1617]: time="2025-11-04T23:51:59.491098800Z" level=info msg="Start subscribing containerd event" Nov 4 23:51:59.491405 containerd[1617]: time="2025-11-04T23:51:59.491245926Z" level=info msg="Start recovering state" Nov 4 23:51:59.491480 containerd[1617]: time="2025-11-04T23:51:59.491461159Z" level=info msg="Start event monitor" Nov 4 23:51:59.491686 containerd[1617]: time="2025-11-04T23:51:59.491507076Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:51:59.491686 containerd[1617]: time="2025-11-04T23:51:59.491523066Z" level=info msg="Start streaming server" Nov 4 23:51:59.491686 containerd[1617]: time="2025-11-04T23:51:59.491549655Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:51:59.491686 containerd[1617]: time="2025-11-04T23:51:59.491547622Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 23:51:59.491686 containerd[1617]: time="2025-11-04T23:51:59.491566557Z" level=info msg="runtime interface starting up..." Nov 4 23:51:59.491686 containerd[1617]: time="2025-11-04T23:51:59.491629756Z" level=info msg="starting plugins..." Nov 4 23:51:59.491686 containerd[1617]: time="2025-11-04T23:51:59.491641518Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 23:51:59.491686 containerd[1617]: time="2025-11-04T23:51:59.491654903Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 23:51:59.494878 containerd[1617]: time="2025-11-04T23:51:59.491883602Z" level=info msg="containerd successfully booted in 0.576818s" Nov 4 23:51:59.492534 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 23:51:59.879485 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 23:51:59.882674 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:52908.service - OpenSSH per-connection server daemon (10.0.0.1:52908). Nov 4 23:52:00.103310 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 52908 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:52:00.105537 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:00.114287 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
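At this point containerd is serving on /run/containerd/containerd.sock (plus its ttrpc socket) and has registered the k8s.io namespace with NRI. A minimal sketch of talking to that socket — assuming the classic Go client module github.com/containerd/containerd, which still speaks to the v2.0.5 daemon shown in this log (newer releases move the client to github.com/containerd/containerd/v2/client):

// Sketch: connect to the containerd socket from the log above and
// print the daemon version. Run as a user with access to the socket.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The log registers the "k8s.io" namespace; CRI workloads live there.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	v, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}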
Nov 4 23:52:00.117550 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 23:52:00.125983 systemd-logind[1590]: New session 1 of user core. Nov 4 23:52:00.180372 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 23:52:00.185721 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 23:52:00.208125 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 23:52:00.211414 systemd-logind[1590]: New session c1 of user core. Nov 4 23:52:00.358863 systemd[1719]: Queued start job for default target default.target. Nov 4 23:52:00.374715 systemd[1719]: Created slice app.slice - User Application Slice. Nov 4 23:52:00.374746 systemd[1719]: Reached target paths.target - Paths. Nov 4 23:52:00.374829 systemd[1719]: Reached target timers.target - Timers. Nov 4 23:52:00.376641 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 23:52:00.394290 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 23:52:00.394444 systemd[1719]: Reached target sockets.target - Sockets. Nov 4 23:52:00.394494 systemd[1719]: Reached target basic.target - Basic System. Nov 4 23:52:00.394537 systemd[1719]: Reached target default.target - Main User Target. Nov 4 23:52:00.394576 systemd[1719]: Startup finished in 172ms. Nov 4 23:52:00.395381 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 23:52:00.406408 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 23:52:00.449379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:52:00.451872 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 23:52:00.453892 systemd[1]: Startup finished in 3.044s (kernel) + 6.826s (initrd) + 5.680s (userspace) = 15.550s. Nov 4 23:52:00.454425 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:52:00.464001 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:52920.service - OpenSSH per-connection server daemon (10.0.0.1:52920). Nov 4 23:52:00.678141 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 52920 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:52:00.679673 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:00.685119 systemd-logind[1590]: New session 2 of user core. Nov 4 23:52:00.692164 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 23:52:00.752052 sshd[1739]: Connection closed by 10.0.0.1 port 52920 Nov 4 23:52:00.752263 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:00.762826 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:52920.service: Deactivated successfully. Nov 4 23:52:00.765055 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 23:52:00.765988 systemd-logind[1590]: Session 2 logged out. Waiting for processes to exit. Nov 4 23:52:00.769569 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:52936.service - OpenSSH per-connection server daemon (10.0.0.1:52936). Nov 4 23:52:00.770660 systemd-logind[1590]: Removed session 2. 
Nov 4 23:52:00.842325 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 52936 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:52:00.843990 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:00.849613 systemd-logind[1590]: New session 3 of user core. Nov 4 23:52:00.865186 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 23:52:00.917139 sshd[1757]: Connection closed by 10.0.0.1 port 52936 Nov 4 23:52:00.917555 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:00.927845 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:52936.service: Deactivated successfully. Nov 4 23:52:00.929935 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 23:52:00.930667 systemd-logind[1590]: Session 3 logged out. Waiting for processes to exit. Nov 4 23:52:00.933951 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:52948.service - OpenSSH per-connection server daemon (10.0.0.1:52948). Nov 4 23:52:00.935467 systemd-logind[1590]: Removed session 3. Nov 4 23:52:00.990845 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 52948 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:52:00.992550 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:00.996973 systemd-logind[1590]: New session 4 of user core. Nov 4 23:52:01.005160 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 23:52:01.208880 kubelet[1733]: E1104 23:52:01.208772 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:52:01.213926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:52:01.214267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:52:01.214970 systemd[1]: kubelet.service: Consumed 1.972s CPU time, 265.4M memory peak. Nov 4 23:52:01.225025 sshd[1768]: Connection closed by 10.0.0.1 port 52948 Nov 4 23:52:01.225641 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:01.239936 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:52948.service: Deactivated successfully. Nov 4 23:52:01.242495 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 23:52:01.243437 systemd-logind[1590]: Session 4 logged out. Waiting for processes to exit. Nov 4 23:52:01.246716 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:52960.service - OpenSSH per-connection server daemon (10.0.0.1:52960). Nov 4 23:52:01.247596 systemd-logind[1590]: Removed session 4. Nov 4 23:52:01.315276 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 52960 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:52:01.317559 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:01.323913 systemd-logind[1590]: New session 5 of user core. Nov 4 23:52:01.337406 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 4 23:52:01.402271 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 23:52:01.402608 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:52:01.421089 sudo[1779]: pam_unix(sudo:session): session closed for user root Nov 4 23:52:01.423440 sshd[1778]: Connection closed by 10.0.0.1 port 52960 Nov 4 23:52:01.423877 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:01.438186 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:52960.service: Deactivated successfully. Nov 4 23:52:01.440547 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 23:52:01.441387 systemd-logind[1590]: Session 5 logged out. Waiting for processes to exit. Nov 4 23:52:01.444548 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:52962.service - OpenSSH per-connection server daemon (10.0.0.1:52962). Nov 4 23:52:01.445445 systemd-logind[1590]: Removed session 5. Nov 4 23:52:01.502056 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 52962 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:52:01.503396 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:01.508436 systemd-logind[1590]: New session 6 of user core. Nov 4 23:52:01.524347 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 23:52:01.581223 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 23:52:01.581586 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:52:01.589162 sudo[1790]: pam_unix(sudo:session): session closed for user root Nov 4 23:52:01.597179 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 23:52:01.597499 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:52:01.608910 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:52:01.667919 augenrules[1812]: No rules Nov 4 23:52:01.669638 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:52:01.669929 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:52:01.671128 sudo[1789]: pam_unix(sudo:session): session closed for user root Nov 4 23:52:01.673234 sshd[1788]: Connection closed by 10.0.0.1 port 52962 Nov 4 23:52:01.673528 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:01.682640 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:52962.service: Deactivated successfully. Nov 4 23:52:01.684380 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 23:52:01.685075 systemd-logind[1590]: Session 6 logged out. Waiting for processes to exit. Nov 4 23:52:01.687680 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:52970.service - OpenSSH per-connection server daemon (10.0.0.1:52970). Nov 4 23:52:01.688400 systemd-logind[1590]: Removed session 6. Nov 4 23:52:01.751912 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 52970 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:52:01.753569 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:52:01.758016 systemd-logind[1590]: New session 7 of user core. Nov 4 23:52:01.768177 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 4 23:52:01.825063 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 23:52:01.825603 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:52:02.793360 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 23:52:02.814689 (dockerd)[1846]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 23:52:03.542764 dockerd[1846]: time="2025-11-04T23:52:03.542650365Z" level=info msg="Starting up" Nov 4 23:52:03.544785 dockerd[1846]: time="2025-11-04T23:52:03.544715127Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 23:52:03.565071 dockerd[1846]: time="2025-11-04T23:52:03.565012512Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 23:52:04.085630 dockerd[1846]: time="2025-11-04T23:52:04.085562740Z" level=info msg="Loading containers: start." Nov 4 23:52:04.157057 kernel: Initializing XFRM netlink socket Nov 4 23:52:04.441288 systemd-networkd[1518]: docker0: Link UP Nov 4 23:52:04.447909 dockerd[1846]: time="2025-11-04T23:52:04.447871278Z" level=info msg="Loading containers: done." Nov 4 23:52:04.479063 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck913028138-merged.mount: Deactivated successfully. Nov 4 23:52:04.480593 dockerd[1846]: time="2025-11-04T23:52:04.480554871Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 23:52:04.480662 dockerd[1846]: time="2025-11-04T23:52:04.480642676Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 23:52:04.480759 dockerd[1846]: time="2025-11-04T23:52:04.480744787Z" level=info msg="Initializing buildkit" Nov 4 23:52:04.514684 dockerd[1846]: time="2025-11-04T23:52:04.514625255Z" level=info msg="Completed buildkit initialization" Nov 4 23:52:04.519001 dockerd[1846]: time="2025-11-04T23:52:04.518956317Z" level=info msg="Daemon has completed initialization" Nov 4 23:52:04.519089 dockerd[1846]: time="2025-11-04T23:52:04.519022912Z" level=info msg="API listen on /run/docker.sock" Nov 4 23:52:04.519263 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 23:52:05.620866 containerd[1617]: time="2025-11-04T23:52:05.620801510Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 4 23:52:06.358936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898112852.mount: Deactivated successfully. 
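The PullImage/ImageCreate lines that follow are the CRI plugin fetching the Kubernetes control-plane images. The same pull can be reproduced directly against the daemon; a sketch under the same client-module assumption as above, with the socket path and image reference taken verbatim from the log:

// Sketch: reproduce one of the image pulls the log records, using the
// containerd Go client. WithPullUnpack mirrors what the CRI plugin does:
// layers are unpacked into the default snapshotter (overlayfs, per the
// CRI config dump earlier in the log).
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.32.9",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	fmt.Println("pulled", img.Name())
}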
Nov 4 23:52:07.686060 containerd[1617]: time="2025-11-04T23:52:07.684861961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:07.687749 containerd[1617]: time="2025-11-04T23:52:07.687692850Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 4 23:52:07.689272 containerd[1617]: time="2025-11-04T23:52:07.689206668Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:07.691931 containerd[1617]: time="2025-11-04T23:52:07.691896032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:07.692913 containerd[1617]: time="2025-11-04T23:52:07.692873215Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.072024465s" Nov 4 23:52:07.692969 containerd[1617]: time="2025-11-04T23:52:07.692912428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 4 23:52:07.693754 containerd[1617]: time="2025-11-04T23:52:07.693711727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 4 23:52:09.565327 containerd[1617]: time="2025-11-04T23:52:09.565259718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:09.566755 containerd[1617]: time="2025-11-04T23:52:09.566683257Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 4 23:52:09.568677 containerd[1617]: time="2025-11-04T23:52:09.568636340Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:09.571727 containerd[1617]: time="2025-11-04T23:52:09.571664459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:09.572927 containerd[1617]: time="2025-11-04T23:52:09.572843871Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.879093151s" Nov 4 23:52:09.572927 containerd[1617]: time="2025-11-04T23:52:09.572892051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 4 23:52:09.573680 containerd[1617]: 
time="2025-11-04T23:52:09.573638541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 4 23:52:11.312478 containerd[1617]: time="2025-11-04T23:52:11.312386674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:11.314187 containerd[1617]: time="2025-11-04T23:52:11.314122229Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 4 23:52:11.315770 containerd[1617]: time="2025-11-04T23:52:11.315732058Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:11.319134 containerd[1617]: time="2025-11-04T23:52:11.319027298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:11.320214 containerd[1617]: time="2025-11-04T23:52:11.320162427Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.746487397s" Nov 4 23:52:11.320214 containerd[1617]: time="2025-11-04T23:52:11.320209485Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 4 23:52:11.321062 containerd[1617]: time="2025-11-04T23:52:11.320822665Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 4 23:52:11.464690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 23:52:11.466915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:52:11.752932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:52:11.757620 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:52:11.876835 kubelet[2140]: E1104 23:52:11.876766 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:52:11.883660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:52:11.883895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:52:11.884384 systemd[1]: kubelet.service: Consumed 304ms CPU time, 110.8M memory peak. Nov 4 23:52:12.782459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2900070871.mount: Deactivated successfully. 
Nov 4 23:52:13.496787 containerd[1617]: time="2025-11-04T23:52:13.496691496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:13.497646 containerd[1617]: time="2025-11-04T23:52:13.497571065Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 4 23:52:13.499630 containerd[1617]: time="2025-11-04T23:52:13.499516754Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:13.501853 containerd[1617]: time="2025-11-04T23:52:13.501797120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:13.502552 containerd[1617]: time="2025-11-04T23:52:13.502494438Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.181634634s" Nov 4 23:52:13.502624 containerd[1617]: time="2025-11-04T23:52:13.502552878Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 4 23:52:13.503492 containerd[1617]: time="2025-11-04T23:52:13.503229988Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 4 23:52:14.219962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432940898.mount: Deactivated successfully. 
Nov 4 23:52:15.114411 containerd[1617]: time="2025-11-04T23:52:15.114339393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:15.115125 containerd[1617]: time="2025-11-04T23:52:15.115100670Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 4 23:52:15.116679 containerd[1617]: time="2025-11-04T23:52:15.116637863Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:15.119696 containerd[1617]: time="2025-11-04T23:52:15.119640414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:15.120862 containerd[1617]: time="2025-11-04T23:52:15.120827801Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.617562787s" Nov 4 23:52:15.120862 containerd[1617]: time="2025-11-04T23:52:15.120859350Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 4 23:52:15.121360 containerd[1617]: time="2025-11-04T23:52:15.121335763Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 23:52:15.766426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663980315.mount: Deactivated successfully. 
Nov 4 23:52:15.772350 containerd[1617]: time="2025-11-04T23:52:15.772315413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:52:15.773341 containerd[1617]: time="2025-11-04T23:52:15.773315568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 4 23:52:15.774466 containerd[1617]: time="2025-11-04T23:52:15.774429658Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:52:15.776583 containerd[1617]: time="2025-11-04T23:52:15.776552088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:52:15.777320 containerd[1617]: time="2025-11-04T23:52:15.777282788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 655.919944ms" Nov 4 23:52:15.777359 containerd[1617]: time="2025-11-04T23:52:15.777316562Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 23:52:15.777784 containerd[1617]: time="2025-11-04T23:52:15.777739084Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 4 23:52:16.406604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806107678.mount: Deactivated successfully. 
Nov 4 23:52:19.218214 containerd[1617]: time="2025-11-04T23:52:19.218144332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:19.218821 containerd[1617]: time="2025-11-04T23:52:19.218772470Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 4 23:52:19.220074 containerd[1617]: time="2025-11-04T23:52:19.220016864Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:19.222982 containerd[1617]: time="2025-11-04T23:52:19.222936539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:19.224290 containerd[1617]: time="2025-11-04T23:52:19.224238971Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.446393348s"
Nov 4 23:52:19.224290 containerd[1617]: time="2025-11-04T23:52:19.224278906Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 4 23:52:21.533532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:52:21.533755 systemd[1]: kubelet.service: Consumed 304ms CPU time, 110.8M memory peak.
Nov 4 23:52:21.536157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:52:21.563913 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-7.scope)...
Nov 4 23:52:21.563929 systemd[1]: Reloading...
Nov 4 23:52:21.653063 zram_generator::config[2344]: No configuration found.
Nov 4 23:52:21.972369 systemd[1]: Reloading finished in 408 ms.
Nov 4 23:52:22.050834 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 4 23:52:22.050945 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 4 23:52:22.051381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:52:22.051429 systemd[1]: kubelet.service: Consumed 166ms CPU time, 98.4M memory peak.
Nov 4 23:52:22.053396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:52:22.252626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:52:22.257364 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 4 23:52:22.305275 kubelet[2388]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:52:22.305275 kubelet[2388]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 4 23:52:22.305275 kubelet[2388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:52:22.305748 kubelet[2388]: I1104 23:52:22.305344 2388 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 4 23:52:22.806918 kubelet[2388]: I1104 23:52:22.806867 2388 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 4 23:52:22.806918 kubelet[2388]: I1104 23:52:22.806900 2388 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 4 23:52:22.807223 kubelet[2388]: I1104 23:52:22.807205 2388 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 4 23:52:22.839742 kubelet[2388]: E1104 23:52:22.839687 2388 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:22.841082 kubelet[2388]: I1104 23:52:22.841021 2388 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 4 23:52:22.851856 kubelet[2388]: I1104 23:52:22.851823 2388 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 4 23:52:22.857871 kubelet[2388]: I1104 23:52:22.857832 2388 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 4 23:52:22.858158 kubelet[2388]: I1104 23:52:22.858107 2388 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 23:52:22.858358 kubelet[2388]: I1104 23:52:22.858145 2388 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 23:52:22.858505 kubelet[2388]: I1104 23:52:22.858374 2388 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 23:52:22.858505 kubelet[2388]: I1104 23:52:22.858383 2388 container_manager_linux.go:304] "Creating device plugin manager"
Nov 4 23:52:22.858561 kubelet[2388]: I1104 23:52:22.858543 2388 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:52:22.862228 kubelet[2388]: I1104 23:52:22.862193 2388 kubelet.go:446] "Attempting to sync node with API server"
Nov 4 23:52:22.862271 kubelet[2388]: I1104 23:52:22.862234 2388 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 23:52:22.862271 kubelet[2388]: I1104 23:52:22.862265 2388 kubelet.go:352] "Adding apiserver pod source"
Nov 4 23:52:22.862433 kubelet[2388]: I1104 23:52:22.862285 2388 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 23:52:22.867974 kubelet[2388]: W1104 23:52:22.867891 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Nov 4 23:52:22.868091 kubelet[2388]: E1104 23:52:22.867987 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:22.868970 kubelet[2388]: I1104 23:52:22.868186 2388 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 4 23:52:22.868970 kubelet[2388]: I1104 23:52:22.868789 2388 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 4 23:52:22.869250 kubelet[2388]: W1104 23:52:22.869193 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Nov 4 23:52:22.869310 kubelet[2388]: E1104 23:52:22.869272 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:22.869979 kubelet[2388]: W1104 23:52:22.869807 2388 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
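The nodeConfig blob logged above carries the kubelet's hard eviction thresholds (memory.available<100Mi, nodefs.available<10%, and so on). A small sketch of decoding those thresholds programmatically, assuming nothing beyond the field names visible in that JSON; the struct and the trimmed sample blob are illustrative:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // threshold mirrors one element of HardEvictionThresholds as printed above.
    type threshold struct {
    	Signal   string
    	Operator string
    	Value    struct {
    		Quantity   *string // e.g. "100Mi"; null when a percentage is used
    		Percentage float64 // e.g. 0.1 for 10%
    	}
    }

    func main() {
    	// Trimmed to two entries from the logged blob for brevity.
    	blob := `{"HardEvictionThresholds":[
    	  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
    	  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`

    	var cfg struct{ HardEvictionThresholds []threshold }
    	if err := json.Unmarshal([]byte(blob), &cfg); err != nil {
    		panic(err)
    	}
    	for _, t := range cfg.HardEvictionThresholds {
    		if t.Value.Quantity != nil {
    			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
    		} else {
    			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
    		}
    	}
    }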
Nov 4 23:52:22.873287 kubelet[2388]: I1104 23:52:22.873258 2388 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 4 23:52:22.873377 kubelet[2388]: I1104 23:52:22.873314 2388 server.go:1287] "Started kubelet"
Nov 4 23:52:22.873574 kubelet[2388]: I1104 23:52:22.873488 2388 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 23:52:22.875521 kubelet[2388]: I1104 23:52:22.875476 2388 server.go:479] "Adding debug handlers to kubelet server"
Nov 4 23:52:22.880057 kubelet[2388]: I1104 23:52:22.878515 2388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 23:52:22.880057 kubelet[2388]: I1104 23:52:22.879680 2388 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 23:52:22.880759 kubelet[2388]: I1104 23:52:22.880621 2388 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 23:52:22.881142 kubelet[2388]: I1104 23:52:22.881112 2388 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 23:52:22.884982 kubelet[2388]: E1104 23:52:22.884951 2388 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 23:52:22.885115 kubelet[2388]: I1104 23:52:22.885093 2388 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 4 23:52:22.885243 kubelet[2388]: E1104 23:52:22.884017 2388 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874f2d32482ceec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 23:52:22.873280236 +0000 UTC m=+0.610750466,LastTimestamp:2025-11-04 23:52:22.873280236 +0000 UTC m=+0.610750466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 4 23:52:22.885243 kubelet[2388]: I1104 23:52:22.885209 2388 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 4 23:52:22.885403 kubelet[2388]: I1104 23:52:22.885305 2388 reconciler.go:26] "Reconciler: start to sync state"
Nov 4 23:52:22.885472 kubelet[2388]: E1104 23:52:22.885186 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 23:52:22.885549 kubelet[2388]: E1104 23:52:22.885512 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms"
Nov 4 23:52:22.886019 kubelet[2388]: W1104 23:52:22.885953 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Nov 4 23:52:22.886122 kubelet[2388]: E1104 23:52:22.886022 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:22.886308 kubelet[2388]: I1104 23:52:22.886265 2388 factory.go:221] Registration of the systemd container factory successfully
Nov 4 23:52:22.886633 kubelet[2388]: I1104 23:52:22.886606 2388 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 23:52:22.887895 kubelet[2388]: I1104 23:52:22.887863 2388 factory.go:221] Registration of the containerd container factory successfully
Nov 4 23:52:22.903287 kubelet[2388]: I1104 23:52:22.903136 2388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 4 23:52:22.904596 kubelet[2388]: I1104 23:52:22.904580 2388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 4 23:52:22.904686 kubelet[2388]: I1104 23:52:22.904676 2388 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 4 23:52:22.904757 kubelet[2388]: I1104 23:52:22.904747 2388 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 23:52:22.904811 kubelet[2388]: I1104 23:52:22.904803 2388 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 4 23:52:22.904916 kubelet[2388]: E1104 23:52:22.904897 2388 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 23:52:22.906916 kubelet[2388]: W1104 23:52:22.906862 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Nov 4 23:52:22.906976 kubelet[2388]: E1104 23:52:22.906919 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:22.914645 kubelet[2388]: I1104 23:52:22.914622 2388 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 23:52:22.914645 kubelet[2388]: I1104 23:52:22.914639 2388 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 23:52:22.914721 kubelet[2388]: I1104 23:52:22.914661 2388 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:52:22.986106 kubelet[2388]: E1104 23:52:22.986072 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 23:52:23.005316 kubelet[2388]: E1104 23:52:23.005271 2388 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 4 23:52:23.086823 kubelet[2388]: E1104 23:52:23.086672 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 23:52:23.087107 kubelet[2388]: E1104 23:52:23.087078 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms"
Nov 4 23:52:23.184594 kubelet[2388]: I1104 23:52:23.184545 2388 policy_none.go:49] "None policy: Start"
Nov 4 23:52:23.184594 kubelet[2388]: I1104 23:52:23.184582 2388 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 4 23:52:23.184594 kubelet[2388]: I1104 23:52:23.184601 2388 state_mem.go:35] "Initializing new in-memory state store"
Nov 4 23:52:23.186946 kubelet[2388]: E1104 23:52:23.186912 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 23:52:23.191154 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 4 23:52:23.205683 kubelet[2388]: E1104 23:52:23.205636 2388 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 4 23:52:23.210739 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 4 23:52:23.214166 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 4 23:52:23.233198 kubelet[2388]: I1104 23:52:23.233149 2388 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 4 23:52:23.233543 kubelet[2388]: I1104 23:52:23.233486 2388 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 23:52:23.233543 kubelet[2388]: I1104 23:52:23.233512 2388 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 23:52:23.233917 kubelet[2388]: I1104 23:52:23.233865 2388 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 23:52:23.234974 kubelet[2388]: E1104 23:52:23.234940 2388 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 23:52:23.235072 kubelet[2388]: E1104 23:52:23.235055 2388 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 4 23:52:23.336957 kubelet[2388]: I1104 23:52:23.336797 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:52:23.337667 kubelet[2388]: E1104 23:52:23.337610 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Nov 4 23:52:23.487728 kubelet[2388]: E1104 23:52:23.487664 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms"
Nov 4 23:52:23.539443 kubelet[2388]: I1104 23:52:23.539400 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:52:23.539846 kubelet[2388]: E1104 23:52:23.539801 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Nov 4 23:52:23.618061 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice.
Nov 4 23:52:23.629883 kubelet[2388]: E1104 23:52:23.629846 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:52:23.632495 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice.
Nov 4 23:52:23.634728 kubelet[2388]: E1104 23:52:23.634684 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:52:23.637362 systemd[1]: Created slice kubepods-burstable-pode4006302d9430d0966904948fcffffb7.slice - libcontainer container kubepods-burstable-pode4006302d9430d0966904948fcffffb7.slice.
Nov 4 23:52:23.639280 kubelet[2388]: E1104 23:52:23.639256 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:52:23.690645 kubelet[2388]: I1104 23:52:23.690568 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:52:23.690645 kubelet[2388]: I1104 23:52:23.690617 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:52:23.690645 kubelet[2388]: I1104 23:52:23.690646 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4006302d9430d0966904948fcffffb7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4006302d9430d0966904948fcffffb7\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 23:52:23.690645 kubelet[2388]: I1104 23:52:23.690662 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4006302d9430d0966904948fcffffb7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4006302d9430d0966904948fcffffb7\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 23:52:23.690921 kubelet[2388]: I1104 23:52:23.690679 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:52:23.690921 kubelet[2388]: I1104 23:52:23.690694 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:52:23.690921 kubelet[2388]: I1104 23:52:23.690708 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4006302d9430d0966904948fcffffb7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e4006302d9430d0966904948fcffffb7\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 23:52:23.690921 kubelet[2388]: I1104 23:52:23.690722 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:52:23.690921 kubelet[2388]: I1104 23:52:23.690739 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 4 23:52:23.742153 kubelet[2388]: W1104 23:52:23.742084 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Nov 4 23:52:23.742153 kubelet[2388]: E1104 23:52:23.742151 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:23.931209 kubelet[2388]: E1104 23:52:23.931157 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:23.931953 containerd[1617]: time="2025-11-04T23:52:23.931911618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}"
Nov 4 23:52:23.936151 kubelet[2388]: E1104 23:52:23.936119 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:23.936760 containerd[1617]: time="2025-11-04T23:52:23.936712331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}"
Nov 4 23:52:23.940149 kubelet[2388]: E1104 23:52:23.939987 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:23.940394 containerd[1617]: time="2025-11-04T23:52:23.940352828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e4006302d9430d0966904948fcffffb7,Namespace:kube-system,Attempt:0,}"
Nov 4 23:52:23.941256 kubelet[2388]: I1104 23:52:23.941224 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:52:23.941593 kubelet[2388]: E1104 23:52:23.941563 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Nov 4 23:52:24.274020 kubelet[2388]: W1104 23:52:24.273877 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Nov 4 23:52:24.274020 kubelet[2388]: E1104 23:52:24.273948 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:24.278792 kubelet[2388]: W1104 23:52:24.278752 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Nov 4 23:52:24.278837 kubelet[2388]: E1104 23:52:24.278794 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:24.288608 kubelet[2388]: E1104 23:52:24.288552 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s"
Nov 4 23:52:24.407857 kubelet[2388]: W1104 23:52:24.407744 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Nov 4 23:52:24.407857 kubelet[2388]: E1104 23:52:24.407843 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:24.672844 containerd[1617]: time="2025-11-04T23:52:24.672786443Z" level=info msg="connecting to shim 74de6d3168135f7bdc461f7e4a679c7cc420751cf6e5ba8f7d237772649d883d" address="unix:///run/containerd/s/96fff5b0583d2bc544f2197c941bbc00c25eae2655e857adad4ff7bf711ad553" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:52:24.711830 containerd[1617]: time="2025-11-04T23:52:24.711720017Z" level=info msg="connecting to shim c74b711d258952478430722191a6032f82172c5249bc58a62c3e16dfdbb14be9" address="unix:///run/containerd/s/917bf0e5f3bc44b9bdd549092079d50d384911ede7ac4677b36c9d93df34d6b3" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:52:24.715795 containerd[1617]: time="2025-11-04T23:52:24.715747089Z" level=info msg="connecting to shim 5c49ca8b59e1fef9b0ae32409c8375ed01bb1fbee019d970869725cb006cd937" address="unix:///run/containerd/s/4fe74344d7f491527148c94b5d4b0045af8726f0359df05f7a44e4a78438aa4a" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:52:24.735196 systemd[1]: Started cri-containerd-74de6d3168135f7bdc461f7e4a679c7cc420751cf6e5ba8f7d237772649d883d.scope - libcontainer container 74de6d3168135f7bdc461f7e4a679c7cc420751cf6e5ba8f7d237772649d883d.
Nov 4 23:52:24.936915 kubelet[2388]: I1104 23:52:24.936792 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:52:24.938726 kubelet[2388]: E1104 23:52:24.938698 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Nov 4 23:52:24.944162 systemd[1]: Started cri-containerd-c74b711d258952478430722191a6032f82172c5249bc58a62c3e16dfdbb14be9.scope - libcontainer container c74b711d258952478430722191a6032f82172c5249bc58a62c3e16dfdbb14be9.
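The lease controller's retry interval above doubles on each failure: 200ms, 400ms, 800ms, then 1.6s. A toy Go loop reproducing just that doubling pattern; the 7s cap is an assumption for the sketch, not something these lines show:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	interval := 200 * time.Millisecond // first retry interval seen in the log
    	maxInterval := 7 * time.Second     // assumed cap for illustration
    	for i := 0; i < 6; i++ {
    		fmt.Printf("retry %d after %v\n", i+1, interval)
    		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as logged
    		if interval > maxInterval {
    			interval = maxInterval
    		}
    	}
    }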
Nov 4 23:52:24.948811 systemd[1]: Started cri-containerd-5c49ca8b59e1fef9b0ae32409c8375ed01bb1fbee019d970869725cb006cd937.scope - libcontainer container 5c49ca8b59e1fef9b0ae32409c8375ed01bb1fbee019d970869725cb006cd937.
Nov 4 23:52:25.004411 kubelet[2388]: E1104 23:52:25.004362 2388 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Nov 4 23:52:25.057889 containerd[1617]: time="2025-11-04T23:52:25.057834926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"74de6d3168135f7bdc461f7e4a679c7cc420751cf6e5ba8f7d237772649d883d\""
Nov 4 23:52:25.059218 kubelet[2388]: E1104 23:52:25.059174 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:25.061468 containerd[1617]: time="2025-11-04T23:52:25.061414830Z" level=info msg="CreateContainer within sandbox \"74de6d3168135f7bdc461f7e4a679c7cc420751cf6e5ba8f7d237772649d883d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 4 23:52:25.108537 containerd[1617]: time="2025-11-04T23:52:25.108492036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e4006302d9430d0966904948fcffffb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c49ca8b59e1fef9b0ae32409c8375ed01bb1fbee019d970869725cb006cd937\""
Nov 4 23:52:25.109303 kubelet[2388]: E1104 23:52:25.109257 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:25.111198 containerd[1617]: time="2025-11-04T23:52:25.111165059Z" level=info msg="CreateContainer within sandbox \"5c49ca8b59e1fef9b0ae32409c8375ed01bb1fbee019d970869725cb006cd937\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 4 23:52:25.126067 containerd[1617]: time="2025-11-04T23:52:25.125985823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c74b711d258952478430722191a6032f82172c5249bc58a62c3e16dfdbb14be9\""
Nov 4 23:52:25.126523 kubelet[2388]: E1104 23:52:25.126492 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:25.127808 containerd[1617]: time="2025-11-04T23:52:25.127782503Z" level=info msg="CreateContainer within sandbox \"c74b711d258952478430722191a6032f82172c5249bc58a62c3e16dfdbb14be9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 4 23:52:25.280431 containerd[1617]: time="2025-11-04T23:52:25.280295970Z" level=info msg="Container 3297953661695f28078561b36d9156a71564d2b90058ea6a99aeb3eb48a569d7: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:52:25.283700 containerd[1617]: time="2025-11-04T23:52:25.283648597Z" level=info msg="Container 7e83263baf1430790615df4bbf02452689d484339f00a0212b895c48bb543d7c: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:52:25.286913 containerd[1617]: time="2025-11-04T23:52:25.286857906Z" level=info msg="Container edb1c1ef2055d86c286b4ee4ef537b3719cf797bb0752d57215cb1e9304f53a2: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:52:25.294057 containerd[1617]: time="2025-11-04T23:52:25.293973250Z" level=info msg="CreateContainer within sandbox \"5c49ca8b59e1fef9b0ae32409c8375ed01bb1fbee019d970869725cb006cd937\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7e83263baf1430790615df4bbf02452689d484339f00a0212b895c48bb543d7c\""
Nov 4 23:52:25.294766 containerd[1617]: time="2025-11-04T23:52:25.294721814Z" level=info msg="StartContainer for \"7e83263baf1430790615df4bbf02452689d484339f00a0212b895c48bb543d7c\""
Nov 4 23:52:25.295913 containerd[1617]: time="2025-11-04T23:52:25.295878052Z" level=info msg="connecting to shim 7e83263baf1430790615df4bbf02452689d484339f00a0212b895c48bb543d7c" address="unix:///run/containerd/s/4fe74344d7f491527148c94b5d4b0045af8726f0359df05f7a44e4a78438aa4a" protocol=ttrpc version=3
Nov 4 23:52:25.297233 containerd[1617]: time="2025-11-04T23:52:25.297177810Z" level=info msg="CreateContainer within sandbox \"74de6d3168135f7bdc461f7e4a679c7cc420751cf6e5ba8f7d237772649d883d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3297953661695f28078561b36d9156a71564d2b90058ea6a99aeb3eb48a569d7\""
Nov 4 23:52:25.297634 containerd[1617]: time="2025-11-04T23:52:25.297581026Z" level=info msg="StartContainer for \"3297953661695f28078561b36d9156a71564d2b90058ea6a99aeb3eb48a569d7\""
Nov 4 23:52:25.299069 containerd[1617]: time="2025-11-04T23:52:25.298719881Z" level=info msg="connecting to shim 3297953661695f28078561b36d9156a71564d2b90058ea6a99aeb3eb48a569d7" address="unix:///run/containerd/s/96fff5b0583d2bc544f2197c941bbc00c25eae2655e857adad4ff7bf711ad553" protocol=ttrpc version=3
Nov 4 23:52:25.299574 containerd[1617]: time="2025-11-04T23:52:25.299530412Z" level=info msg="CreateContainer within sandbox \"c74b711d258952478430722191a6032f82172c5249bc58a62c3e16dfdbb14be9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"edb1c1ef2055d86c286b4ee4ef537b3719cf797bb0752d57215cb1e9304f53a2\""
Nov 4 23:52:25.300273 containerd[1617]: time="2025-11-04T23:52:25.300235003Z" level=info msg="StartContainer for \"edb1c1ef2055d86c286b4ee4ef537b3719cf797bb0752d57215cb1e9304f53a2\""
Nov 4 23:52:25.301654 containerd[1617]: time="2025-11-04T23:52:25.301619720Z" level=info msg="connecting to shim edb1c1ef2055d86c286b4ee4ef537b3719cf797bb0752d57215cb1e9304f53a2" address="unix:///run/containerd/s/917bf0e5f3bc44b9bdd549092079d50d384911ede7ac4677b36c9d93df34d6b3" protocol=ttrpc version=3
Nov 4 23:52:25.324322 systemd[1]: Started cri-containerd-7e83263baf1430790615df4bbf02452689d484339f00a0212b895c48bb543d7c.scope - libcontainer container 7e83263baf1430790615df4bbf02452689d484339f00a0212b895c48bb543d7c.
Nov 4 23:52:25.334271 systemd[1]: Started cri-containerd-3297953661695f28078561b36d9156a71564d2b90058ea6a99aeb3eb48a569d7.scope - libcontainer container 3297953661695f28078561b36d9156a71564d2b90058ea6a99aeb3eb48a569d7.
Nov 4 23:52:25.336140 systemd[1]: Started cri-containerd-edb1c1ef2055d86c286b4ee4ef537b3719cf797bb0752d57215cb1e9304f53a2.scope - libcontainer container edb1c1ef2055d86c286b4ee4ef537b3719cf797bb0752d57215cb1e9304f53a2.
Nov 4 23:52:25.399937 containerd[1617]: time="2025-11-04T23:52:25.399853353Z" level=info msg="StartContainer for \"7e83263baf1430790615df4bbf02452689d484339f00a0212b895c48bb543d7c\" returns successfully"
Nov 4 23:52:25.405888 containerd[1617]: time="2025-11-04T23:52:25.405854507Z" level=info msg="StartContainer for \"edb1c1ef2055d86c286b4ee4ef537b3719cf797bb0752d57215cb1e9304f53a2\" returns successfully"
Nov 4 23:52:25.407678 containerd[1617]: time="2025-11-04T23:52:25.407621872Z" level=info msg="StartContainer for \"3297953661695f28078561b36d9156a71564d2b90058ea6a99aeb3eb48a569d7\" returns successfully"
Nov 4 23:52:25.952986 kubelet[2388]: E1104 23:52:25.952934 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:52:25.954177 kubelet[2388]: E1104 23:52:25.954112 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:25.957968 kubelet[2388]: E1104 23:52:25.957934 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:52:25.960123 kubelet[2388]: E1104 23:52:25.960079 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:25.960863 kubelet[2388]: E1104 23:52:25.960837 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:52:25.967896 kubelet[2388]: E1104 23:52:25.967788 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:26.540642 kubelet[2388]: I1104 23:52:26.540607 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:52:26.721782 kubelet[2388]: E1104 23:52:26.721721 2388 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 4 23:52:26.794586 kubelet[2388]: I1104 23:52:26.794342 2388 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 4 23:52:26.885392 kubelet[2388]: I1104 23:52:26.885331 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:52:26.890994 kubelet[2388]: E1104 23:52:26.890935 2388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:52:26.890994 kubelet[2388]: I1104 23:52:26.890971 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 4 23:52:26.892654 kubelet[2388]: E1104 23:52:26.892621 2388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 4 23:52:26.892654 kubelet[2388]: I1104 23:52:26.892640 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 4 23:52:26.894239 kubelet[2388]: E1104 23:52:26.894212 2388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 4 23:52:26.934841 kubelet[2388]: I1104 23:52:26.934775 2388 apiserver.go:52] "Watching apiserver"
Nov 4 23:52:26.960704 kubelet[2388]: I1104 23:52:26.960641 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 4 23:52:26.961333 kubelet[2388]: I1104 23:52:26.961064 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 4 23:52:26.963027 kubelet[2388]: E1104 23:52:26.962644 2388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 4 23:52:26.963027 kubelet[2388]: E1104 23:52:26.962828 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:26.963027 kubelet[2388]: E1104 23:52:26.962989 2388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 4 23:52:26.963265 kubelet[2388]: E1104 23:52:26.963234 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:26.985587 kubelet[2388]: I1104 23:52:26.985503 2388 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 4 23:52:29.202145 systemd[1]: Reload requested from client PID 2667 ('systemctl') (unit session-7.scope)...
Nov 4 23:52:29.202166 systemd[1]: Reloading...
Nov 4 23:52:29.283115 zram_generator::config[2711]: No configuration found.
Nov 4 23:52:29.523055 systemd[1]: Reloading finished in 320 ms.
Nov 4 23:52:29.560821 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:52:29.569664 systemd[1]: kubelet.service: Deactivated successfully.
Nov 4 23:52:29.569944 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:52:29.569988 systemd[1]: kubelet.service: Consumed 1.238s CPU time, 132.2M memory peak.
Nov 4 23:52:29.572759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:52:29.800817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:52:29.812452 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 4 23:52:29.854633 kubelet[2756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:52:29.854633 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 4 23:52:29.854633 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
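The repeated `Nameserver limits exceeded` errors above report that only three nameservers were applied (1.1.1.1 1.0.0.1 8.8.8.8) and the rest dropped, matching the classic resolver limit of three `nameserver` entries in resolv.conf. A hypothetical Go checker in the same spirit; the helper and its output format are illustrative, not kubelet code:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // classic glibc resolver limit (MAXNS)

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	var ns []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			ns = append(ns, fields[1])
    		}
    	}
    	if len(ns) > maxNameservers {
    		// Mirrors the situation in the log: extras beyond the first three are ignored.
    		fmt.Printf("nameserver limit exceeded: applied %v, omitted %v\n",
    			ns[:maxNameservers], ns[maxNameservers:])
    	}
    }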
Nov 4 23:52:29.855164 kubelet[2756]: I1104 23:52:29.854726 2756 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:52:29.863761 kubelet[2756]: I1104 23:52:29.863710 2756 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 4 23:52:29.863761 kubelet[2756]: I1104 23:52:29.863739 2756 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:52:29.864008 kubelet[2756]: I1104 23:52:29.863985 2756 server.go:954] "Client rotation is on, will bootstrap in background" Nov 4 23:52:29.865239 kubelet[2756]: I1104 23:52:29.865209 2756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 4 23:52:29.867694 kubelet[2756]: I1104 23:52:29.867640 2756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:52:29.876171 kubelet[2756]: I1104 23:52:29.876070 2756 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:52:29.882629 kubelet[2756]: I1104 23:52:29.882583 2756 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 4 23:52:29.882957 kubelet[2756]: I1104 23:52:29.882915 2756 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:52:29.883153 kubelet[2756]: I1104 23:52:29.882948 2756 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:52:29.883267 kubelet[2756]: I1104 23:52:29.883163 2756 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:52:29.883267 kubelet[2756]: I1104 23:52:29.883173 2756 container_manager_linux.go:304] "Creating device plugin manager" Nov 4 23:52:29.883267 kubelet[2756]: I1104 23:52:29.883239 2756 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:52:29.883440 kubelet[2756]: I1104 23:52:29.883422 
2756 kubelet.go:446] "Attempting to sync node with API server" Nov 4 23:52:29.883464 kubelet[2756]: I1104 23:52:29.883447 2756 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:52:29.883488 kubelet[2756]: I1104 23:52:29.883472 2756 kubelet.go:352] "Adding apiserver pod source" Nov 4 23:52:29.883518 kubelet[2756]: I1104 23:52:29.883491 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:52:29.885096 kubelet[2756]: I1104 23:52:29.885074 2756 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:52:29.885465 kubelet[2756]: I1104 23:52:29.885447 2756 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 4 23:52:29.885940 kubelet[2756]: I1104 23:52:29.885912 2756 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:52:29.885971 kubelet[2756]: I1104 23:52:29.885948 2756 server.go:1287] "Started kubelet" Nov 4 23:52:29.887490 kubelet[2756]: I1104 23:52:29.887471 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:52:29.891961 kubelet[2756]: I1104 23:52:29.891328 2756 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:52:29.892190 kubelet[2756]: I1104 23:52:29.892161 2756 server.go:479] "Adding debug handlers to kubelet server" Nov 4 23:52:29.896389 kubelet[2756]: I1104 23:52:29.895301 2756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:52:29.896389 kubelet[2756]: I1104 23:52:29.895575 2756 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:52:29.896389 kubelet[2756]: I1104 23:52:29.896068 2756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:52:29.900267 kubelet[2756]: I1104 23:52:29.900160 2756 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:52:29.900613 kubelet[2756]: E1104 23:52:29.900589 2756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:52:29.901328 kubelet[2756]: E1104 23:52:29.900855 2756 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:52:29.901328 kubelet[2756]: I1104 23:52:29.901112 2756 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:52:29.902555 kubelet[2756]: I1104 23:52:29.902532 2756 factory.go:221] Registration of the systemd container factory successfully Nov 4 23:52:29.902668 kubelet[2756]: I1104 23:52:29.902648 2756 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:52:29.902942 kubelet[2756]: I1104 23:52:29.902921 2756 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:52:29.905839 kubelet[2756]: I1104 23:52:29.905738 2756 factory.go:221] Registration of the containerd container factory successfully Nov 4 23:52:29.907548 kubelet[2756]: I1104 23:52:29.907511 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 4 23:52:29.908949 kubelet[2756]: I1104 23:52:29.908897 2756 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 4 23:52:29.908979 kubelet[2756]: I1104 23:52:29.908964 2756 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 4 23:52:29.909043 kubelet[2756]: I1104 23:52:29.909017 2756 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:52:29.909108 kubelet[2756]: I1104 23:52:29.909094 2756 kubelet.go:2382] "Starting kubelet main sync loop" Nov 4 23:52:29.909236 kubelet[2756]: E1104 23:52:29.909204 2756 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:52:29.944379 kubelet[2756]: I1104 23:52:29.944333 2756 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:52:29.944379 kubelet[2756]: I1104 23:52:29.944358 2756 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:52:29.944379 kubelet[2756]: I1104 23:52:29.944383 2756 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:52:29.944618 kubelet[2756]: I1104 23:52:29.944595 2756 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 23:52:29.944669 kubelet[2756]: I1104 23:52:29.944611 2756 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 23:52:29.944669 kubelet[2756]: I1104 23:52:29.944630 2756 policy_none.go:49] "None policy: Start" Nov 4 23:52:29.944669 kubelet[2756]: I1104 23:52:29.944640 2756 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:52:29.944669 kubelet[2756]: I1104 23:52:29.944651 2756 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:52:29.944769 kubelet[2756]: I1104 23:52:29.944756 2756 state_mem.go:75] "Updated machine memory state" Nov 4 23:52:29.949628 kubelet[2756]: I1104 23:52:29.949567 2756 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 4 23:52:29.949890 kubelet[2756]: I1104 23:52:29.949859 2756 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:52:29.949965 kubelet[2756]: I1104 23:52:29.949884 2756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:52:29.950213 kubelet[2756]: I1104 23:52:29.950155 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:52:29.951090 kubelet[2756]: E1104 23:52:29.950963 2756 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 23:52:30.010461 kubelet[2756]: I1104 23:52:30.010416 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:52:30.010877 kubelet[2756]: I1104 23:52:30.010569 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:52:30.010877 kubelet[2756]: I1104 23:52:30.010644 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:52:30.054907 kubelet[2756]: I1104 23:52:30.054743 2756 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:52:30.061384 kubelet[2756]: I1104 23:52:30.061336 2756 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 23:52:30.061498 kubelet[2756]: I1104 23:52:30.061439 2756 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 23:52:30.104672 kubelet[2756]: I1104 23:52:30.104600 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4006302d9430d0966904948fcffffb7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4006302d9430d0966904948fcffffb7\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:52:30.104672 kubelet[2756]: I1104 23:52:30.104654 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4006302d9430d0966904948fcffffb7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e4006302d9430d0966904948fcffffb7\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:52:30.104672 kubelet[2756]: I1104 23:52:30.104683 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:52:30.104950 kubelet[2756]: I1104 23:52:30.104703 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:52:30.104950 kubelet[2756]: I1104 23:52:30.104775 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4006302d9430d0966904948fcffffb7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e4006302d9430d0966904948fcffffb7\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:52:30.104950 kubelet[2756]: I1104 23:52:30.104822 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:52:30.104950 kubelet[2756]: I1104 23:52:30.104847 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:52:30.104950 kubelet[2756]: I1104 23:52:30.104863 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 4 23:52:30.105321 kubelet[2756]: I1104 23:52:30.104879 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:52:30.316812 kubelet[2756]: E1104 23:52:30.316637 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:30.316812 kubelet[2756]: E1104 23:52:30.316792 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:30.317924 kubelet[2756]: E1104 23:52:30.317802 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:30.884782 kubelet[2756]: I1104 23:52:30.884593 2756 apiserver.go:52] "Watching apiserver" Nov 4 23:52:30.901373 kubelet[2756]: I1104 23:52:30.901311 2756 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:52:30.929543 kubelet[2756]: I1104 23:52:30.929499 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:52:30.929960 kubelet[2756]: I1104 23:52:30.929913 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:52:30.930096 kubelet[2756]: I1104 23:52:30.930068 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:52:30.935429 kubelet[2756]: E1104 23:52:30.935379 2756 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:52:30.936094 kubelet[2756]: E1104 23:52:30.936056 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:30.938182 kubelet[2756]: E1104 23:52:30.938142 2756 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 23:52:30.938415 kubelet[2756]: E1104 23:52:30.938289 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:30.939062 kubelet[2756]: E1104 23:52:30.938560 2756 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Nov 4 23:52:30.939062 kubelet[2756]: E1104 23:52:30.938741 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:30.957613 kubelet[2756]: I1104 23:52:30.957008 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.956986823 podStartE2EDuration="956.986823ms" podCreationTimestamp="2025-11-04 23:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:52:30.956766792 +0000 UTC m=+1.139367745" watchObservedRunningTime="2025-11-04 23:52:30.956986823 +0000 UTC m=+1.139587766" Nov 4 23:52:30.973105 kubelet[2756]: I1104 23:52:30.973027 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.972979909 podStartE2EDuration="972.979909ms" podCreationTimestamp="2025-11-04 23:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:52:30.972832266 +0000 UTC m=+1.155433219" watchObservedRunningTime="2025-11-04 23:52:30.972979909 +0000 UTC m=+1.155580872" Nov 4 23:52:30.973339 kubelet[2756]: I1104 23:52:30.973137 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.973132421 podStartE2EDuration="973.132421ms" podCreationTimestamp="2025-11-04 23:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:52:30.964802995 +0000 UTC m=+1.147403948" watchObservedRunningTime="2025-11-04 23:52:30.973132421 +0000 UTC m=+1.155733374" Nov 4 23:52:31.930648 kubelet[2756]: E1104 23:52:31.930598 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:31.930648 kubelet[2756]: E1104 23:52:31.930612 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:31.931295 kubelet[2756]: E1104 23:52:31.930837 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:35.513712 kubelet[2756]: I1104 23:52:35.513665 2756 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 23:52:35.514180 containerd[1617]: time="2025-11-04T23:52:35.513998008Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 4 23:52:35.514450 kubelet[2756]: I1104 23:52:35.514217 2756 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 23:52:36.355052 kubelet[2756]: E1104 23:52:36.354923 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:36.460860 systemd[1]: Created slice kubepods-besteffort-pod690018ea_40e8_4528_bd4d_dde174340a50.slice - libcontainer container kubepods-besteffort-pod690018ea_40e8_4528_bd4d_dde174340a50.slice. Nov 4 23:52:36.545688 kubelet[2756]: I1104 23:52:36.545645 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/690018ea-40e8-4528-bd4d-dde174340a50-lib-modules\") pod \"kube-proxy-nqxhn\" (UID: \"690018ea-40e8-4528-bd4d-dde174340a50\") " pod="kube-system/kube-proxy-nqxhn" Nov 4 23:52:36.545688 kubelet[2756]: I1104 23:52:36.545678 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/690018ea-40e8-4528-bd4d-dde174340a50-kube-proxy\") pod \"kube-proxy-nqxhn\" (UID: \"690018ea-40e8-4528-bd4d-dde174340a50\") " pod="kube-system/kube-proxy-nqxhn" Nov 4 23:52:36.545688 kubelet[2756]: I1104 23:52:36.545700 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/690018ea-40e8-4528-bd4d-dde174340a50-xtables-lock\") pod \"kube-proxy-nqxhn\" (UID: \"690018ea-40e8-4528-bd4d-dde174340a50\") " pod="kube-system/kube-proxy-nqxhn" Nov 4 23:52:36.546190 kubelet[2756]: I1104 23:52:36.545721 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96q4z\" (UniqueName: \"kubernetes.io/projected/690018ea-40e8-4528-bd4d-dde174340a50-kube-api-access-96q4z\") pod \"kube-proxy-nqxhn\" (UID: \"690018ea-40e8-4528-bd4d-dde174340a50\") " pod="kube-system/kube-proxy-nqxhn" Nov 4 23:52:36.622118 systemd[1]: Created slice kubepods-besteffort-pod10751091_1e40_4ed3_849a_6cde64c8b56e.slice - libcontainer container kubepods-besteffort-pod10751091_1e40_4ed3_849a_6cde64c8b56e.slice. 
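The PodCIDR update and containerd's note that no CNI config exists yet are expected at this point: the kubelet has just learned its range (192.168.0.0/24), and pod networking stays not-ready until Calico, installed via the tigera-operator below, drops a config into /etc/cni/net.d. Two read-only checks, assuming kubectl access and the default CNI config path:

    # Confirm the PodCIDR the kubelet just applied to this node.
    kubectl get node localhost -o jsonpath='{.spec.podCIDR}{"\n"}'

    # Watch for the CNI config that calico-node will eventually write.
    ls -l /etc/cni/net.d/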
Nov 4 23:52:36.646933 kubelet[2756]: I1104 23:52:36.646886 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd6vd\" (UniqueName: \"kubernetes.io/projected/10751091-1e40-4ed3-849a-6cde64c8b56e-kube-api-access-kd6vd\") pod \"tigera-operator-7dcd859c48-cq7sg\" (UID: \"10751091-1e40-4ed3-849a-6cde64c8b56e\") " pod="tigera-operator/tigera-operator-7dcd859c48-cq7sg" Nov 4 23:52:36.646933 kubelet[2756]: I1104 23:52:36.646938 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10751091-1e40-4ed3-849a-6cde64c8b56e-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cq7sg\" (UID: \"10751091-1e40-4ed3-849a-6cde64c8b56e\") " pod="tigera-operator/tigera-operator-7dcd859c48-cq7sg" Nov 4 23:52:36.773645 kubelet[2756]: E1104 23:52:36.773554 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:36.774455 containerd[1617]: time="2025-11-04T23:52:36.774410779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nqxhn,Uid:690018ea-40e8-4528-bd4d-dde174340a50,Namespace:kube-system,Attempt:0,}" Nov 4 23:52:36.799078 containerd[1617]: time="2025-11-04T23:52:36.798883783Z" level=info msg="connecting to shim 5267023c356ae7ad04fb58d68ac16ce272ac09e36dff82ca64ea2e0c38df98ea" address="unix:///run/containerd/s/8e1226a00c44ed02871873632cf30ff81177ca4d6e1f3badb07ec322130afbb8" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:52:36.830245 systemd[1]: Started cri-containerd-5267023c356ae7ad04fb58d68ac16ce272ac09e36dff82ca64ea2e0c38df98ea.scope - libcontainer container 5267023c356ae7ad04fb58d68ac16ce272ac09e36dff82ca64ea2e0c38df98ea. 
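The "connecting to shim ... address=unix:///run/containerd/s/..." lines show containerd starting a per-sandbox shim and speaking ttrpc to it, while the matching cri-containerd-<id>.scope units keep each container in its own systemd cgroup. The ids in these entries can be cross-checked with crictl, assuming it points at this node's default containerd endpoint:

    # List the kube-proxy sandbox, then its containers, by the ids from the log.
    crictl pods --name kube-proxy-nqxhn
    crictl ps --pod 5267023c356ae7ad04fb58d68ac16ce272ac09e36dff82ca64ea2e0c38df98ea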
Nov 4 23:52:36.857553 containerd[1617]: time="2025-11-04T23:52:36.857494490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nqxhn,Uid:690018ea-40e8-4528-bd4d-dde174340a50,Namespace:kube-system,Attempt:0,} returns sandbox id \"5267023c356ae7ad04fb58d68ac16ce272ac09e36dff82ca64ea2e0c38df98ea\"" Nov 4 23:52:36.858238 kubelet[2756]: E1104 23:52:36.858206 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:36.860017 containerd[1617]: time="2025-11-04T23:52:36.859988874Z" level=info msg="CreateContainer within sandbox \"5267023c356ae7ad04fb58d68ac16ce272ac09e36dff82ca64ea2e0c38df98ea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 23:52:36.871257 containerd[1617]: time="2025-11-04T23:52:36.871196328Z" level=info msg="Container 05b6dce1b0ea690a14f2a8b1ce06acd0cf327a90ed6e65bb33c6f563b4690fe0: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:36.879621 containerd[1617]: time="2025-11-04T23:52:36.879535478Z" level=info msg="CreateContainer within sandbox \"5267023c356ae7ad04fb58d68ac16ce272ac09e36dff82ca64ea2e0c38df98ea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"05b6dce1b0ea690a14f2a8b1ce06acd0cf327a90ed6e65bb33c6f563b4690fe0\"" Nov 4 23:52:36.880378 containerd[1617]: time="2025-11-04T23:52:36.880214169Z" level=info msg="StartContainer for \"05b6dce1b0ea690a14f2a8b1ce06acd0cf327a90ed6e65bb33c6f563b4690fe0\"" Nov 4 23:52:36.881612 containerd[1617]: time="2025-11-04T23:52:36.881587140Z" level=info msg="connecting to shim 05b6dce1b0ea690a14f2a8b1ce06acd0cf327a90ed6e65bb33c6f563b4690fe0" address="unix:///run/containerd/s/8e1226a00c44ed02871873632cf30ff81177ca4d6e1f3badb07ec322130afbb8" protocol=ttrpc version=3 Nov 4 23:52:36.909258 systemd[1]: Started cri-containerd-05b6dce1b0ea690a14f2a8b1ce06acd0cf327a90ed6e65bb33c6f563b4690fe0.scope - libcontainer container 05b6dce1b0ea690a14f2a8b1ce06acd0cf327a90ed6e65bb33c6f563b4690fe0. Nov 4 23:52:36.926793 containerd[1617]: time="2025-11-04T23:52:36.926728922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cq7sg,Uid:10751091-1e40-4ed3-849a-6cde64c8b56e,Namespace:tigera-operator,Attempt:0,}" Nov 4 23:52:36.940914 kubelet[2756]: E1104 23:52:36.940877 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:36.960172 containerd[1617]: time="2025-11-04T23:52:36.960120386Z" level=info msg="StartContainer for \"05b6dce1b0ea690a14f2a8b1ce06acd0cf327a90ed6e65bb33c6f563b4690fe0\" returns successfully" Nov 4 23:52:36.960320 containerd[1617]: time="2025-11-04T23:52:36.960280901Z" level=info msg="connecting to shim 2448fd0aae848e2a78fb26975f1f22462dde30552ff4e684291e592b44e798fa" address="unix:///run/containerd/s/aaf1d261a1a8a5951b69a1850edf1c4098f06b551cedd8305684c8f786329213" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:52:36.994223 systemd[1]: Started cri-containerd-2448fd0aae848e2a78fb26975f1f22462dde30552ff4e684291e592b44e798fa.scope - libcontainer container 2448fd0aae848e2a78fb26975f1f22462dde30552ff4e684291e592b44e798fa. 
Nov 4 23:52:37.046576 containerd[1617]: time="2025-11-04T23:52:37.046523424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cq7sg,Uid:10751091-1e40-4ed3-849a-6cde64c8b56e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2448fd0aae848e2a78fb26975f1f22462dde30552ff4e684291e592b44e798fa\"" Nov 4 23:52:37.049341 containerd[1617]: time="2025-11-04T23:52:37.049292436Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 4 23:52:37.858689 kubelet[2756]: E1104 23:52:37.858615 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:37.944136 kubelet[2756]: E1104 23:52:37.944086 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:37.946436 kubelet[2756]: E1104 23:52:37.946347 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:37.946436 kubelet[2756]: E1104 23:52:37.946404 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:37.960541 kubelet[2756]: I1104 23:52:37.960471 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nqxhn" podStartSLOduration=1.9604487860000002 podStartE2EDuration="1.960448786s" podCreationTimestamp="2025-11-04 23:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:52:37.952620902 +0000 UTC m=+8.135221865" watchObservedRunningTime="2025-11-04 23:52:37.960448786 +0000 UTC m=+8.143049739" Nov 4 23:52:38.211018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889136478.mount: Deactivated successfully. 
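The operator image pull that starts here completes about 1.7 seconds later (see the "Pulled image" entry below, which records the tag, digest, and size). The same pull can be done by hand when pre-seeding a node or debugging registry access, again assuming crictl talks to this containerd:

    # Pre-pull the operator image and verify it landed in the image store.
    crictl pull quay.io/tigera/operator:v1.38.7
    crictl images | grep tigera/operator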
Nov 4 23:52:38.753726 containerd[1617]: time="2025-11-04T23:52:38.753660185Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:38.754378 containerd[1617]: time="2025-11-04T23:52:38.754346989Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 4 23:52:38.755497 containerd[1617]: time="2025-11-04T23:52:38.755470502Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:38.758763 containerd[1617]: time="2025-11-04T23:52:38.758716535Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:38.759327 containerd[1617]: time="2025-11-04T23:52:38.759294883Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.709944347s" Nov 4 23:52:38.759371 containerd[1617]: time="2025-11-04T23:52:38.759326924Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 4 23:52:38.761557 containerd[1617]: time="2025-11-04T23:52:38.761101995Z" level=info msg="CreateContainer within sandbox \"2448fd0aae848e2a78fb26975f1f22462dde30552ff4e684291e592b44e798fa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 4 23:52:38.769765 containerd[1617]: time="2025-11-04T23:52:38.769732731Z" level=info msg="Container 7d5524b8f558070946898db6052098be1883a4a581eb9d1bea1316e8e878567e: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:38.773315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457138388.mount: Deactivated successfully. Nov 4 23:52:38.778739 containerd[1617]: time="2025-11-04T23:52:38.778690988Z" level=info msg="CreateContainer within sandbox \"2448fd0aae848e2a78fb26975f1f22462dde30552ff4e684291e592b44e798fa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7d5524b8f558070946898db6052098be1883a4a581eb9d1bea1316e8e878567e\"" Nov 4 23:52:38.779193 containerd[1617]: time="2025-11-04T23:52:38.779163967Z" level=info msg="StartContainer for \"7d5524b8f558070946898db6052098be1883a4a581eb9d1bea1316e8e878567e\"" Nov 4 23:52:38.779964 containerd[1617]: time="2025-11-04T23:52:38.779933858Z" level=info msg="connecting to shim 7d5524b8f558070946898db6052098be1883a4a581eb9d1bea1316e8e878567e" address="unix:///run/containerd/s/aaf1d261a1a8a5951b69a1850edf1c4098f06b551cedd8305684c8f786329213" protocol=ttrpc version=3 Nov 4 23:52:38.845178 systemd[1]: Started cri-containerd-7d5524b8f558070946898db6052098be1883a4a581eb9d1bea1316e8e878567e.scope - libcontainer container 7d5524b8f558070946898db6052098be1883a4a581eb9d1bea1316e8e878567e. 
Nov 4 23:52:38.876633 containerd[1617]: time="2025-11-04T23:52:38.876593118Z" level=info msg="StartContainer for \"7d5524b8f558070946898db6052098be1883a4a581eb9d1bea1316e8e878567e\" returns successfully" Nov 4 23:52:38.950905 kubelet[2756]: E1104 23:52:38.950054 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:38.958242 kubelet[2756]: I1104 23:52:38.958130 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cq7sg" podStartSLOduration=1.24603872 podStartE2EDuration="2.958113273s" podCreationTimestamp="2025-11-04 23:52:36 +0000 UTC" firstStartedPulling="2025-11-04 23:52:37.047928875 +0000 UTC m=+7.230529828" lastFinishedPulling="2025-11-04 23:52:38.760003428 +0000 UTC m=+8.942604381" observedRunningTime="2025-11-04 23:52:38.957933432 +0000 UTC m=+9.140534415" watchObservedRunningTime="2025-11-04 23:52:38.958113273 +0000 UTC m=+9.140714226" Nov 4 23:52:40.327805 kubelet[2756]: E1104 23:52:40.327750 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:43.907220 update_engine[1595]: I20251104 23:52:43.907114 1595 update_attempter.cc:509] Updating boot flags... Nov 4 23:52:44.441256 sudo[1825]: pam_unix(sudo:session): session closed for user root Nov 4 23:52:44.442905 sshd[1824]: Connection closed by 10.0.0.1 port 52970 Nov 4 23:52:44.443902 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Nov 4 23:52:44.448776 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:52970.service: Deactivated successfully. Nov 4 23:52:44.451643 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 23:52:44.451891 systemd[1]: session-7.scope: Consumed 5.248s CPU time, 223.6M memory peak. Nov 4 23:52:44.453243 systemd-logind[1590]: Session 7 logged out. Waiting for processes to exit. Nov 4 23:52:44.456409 systemd-logind[1590]: Removed session 7. Nov 4 23:52:48.662253 systemd[1]: Created slice kubepods-besteffort-podbc8a8687_5baf_4b7a_9e08_7f47ca49b802.slice - libcontainer container kubepods-besteffort-podbc8a8687_5baf_4b7a_9e08_7f47ca49b802.slice. 
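Each "Created slice kubepods-besteffort-pod<uid>.slice" entry is the kubelet's systemd cgroup driver materializing a per-pod cgroup under the BestEffort QoS tier; the cri-containerd scopes seen elsewhere in this log are parented beneath these slices. The hierarchy can be inspected directly, using the unit name from the entry above:

    # Show the pod slice for calico-typha and the scopes running under it.
    systemctl status kubepods-besteffort-podbc8a8687_5baf_4b7a_9e08_7f47ca49b802.slice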
Nov 4 23:52:48.723853 kubelet[2756]: I1104 23:52:48.723788 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bc8a8687-5baf-4b7a-9e08-7f47ca49b802-typha-certs\") pod \"calico-typha-6c98797585-b64fb\" (UID: \"bc8a8687-5baf-4b7a-9e08-7f47ca49b802\") " pod="calico-system/calico-typha-6c98797585-b64fb" Nov 4 23:52:48.723853 kubelet[2756]: I1104 23:52:48.723839 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc8a8687-5baf-4b7a-9e08-7f47ca49b802-tigera-ca-bundle\") pod \"calico-typha-6c98797585-b64fb\" (UID: \"bc8a8687-5baf-4b7a-9e08-7f47ca49b802\") " pod="calico-system/calico-typha-6c98797585-b64fb" Nov 4 23:52:48.723853 kubelet[2756]: I1104 23:52:48.723859 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcb6\" (UniqueName: \"kubernetes.io/projected/bc8a8687-5baf-4b7a-9e08-7f47ca49b802-kube-api-access-kkcb6\") pod \"calico-typha-6c98797585-b64fb\" (UID: \"bc8a8687-5baf-4b7a-9e08-7f47ca49b802\") " pod="calico-system/calico-typha-6c98797585-b64fb" Nov 4 23:52:48.857855 systemd[1]: Created slice kubepods-besteffort-pod090adf82_49a2_4865_850d_d1d1cafb43d4.slice - libcontainer container kubepods-besteffort-pod090adf82_49a2_4865_850d_d1d1cafb43d4.slice. Nov 4 23:52:48.927328 kubelet[2756]: I1104 23:52:48.926874 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/090adf82-49a2-4865-850d-d1d1cafb43d4-policysync\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927328 kubelet[2756]: I1104 23:52:48.926914 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/090adf82-49a2-4865-850d-d1d1cafb43d4-cni-log-dir\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927328 kubelet[2756]: I1104 23:52:48.926930 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/090adf82-49a2-4865-850d-d1d1cafb43d4-flexvol-driver-host\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927328 kubelet[2756]: I1104 23:52:48.926945 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/090adf82-49a2-4865-850d-d1d1cafb43d4-lib-modules\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927328 kubelet[2756]: I1104 23:52:48.926963 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/090adf82-49a2-4865-850d-d1d1cafb43d4-var-run-calico\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927638 kubelet[2756]: I1104 23:52:48.926978 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq5lv\" 
(UniqueName: \"kubernetes.io/projected/090adf82-49a2-4865-850d-d1d1cafb43d4-kube-api-access-jq5lv\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927638 kubelet[2756]: I1104 23:52:48.927012 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/090adf82-49a2-4865-850d-d1d1cafb43d4-var-lib-calico\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927638 kubelet[2756]: I1104 23:52:48.927059 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/090adf82-49a2-4865-850d-d1d1cafb43d4-xtables-lock\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927638 kubelet[2756]: I1104 23:52:48.927141 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/090adf82-49a2-4865-850d-d1d1cafb43d4-tigera-ca-bundle\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927638 kubelet[2756]: I1104 23:52:48.927201 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/090adf82-49a2-4865-850d-d1d1cafb43d4-cni-bin-dir\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927754 kubelet[2756]: I1104 23:52:48.927222 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/090adf82-49a2-4865-850d-d1d1cafb43d4-cni-net-dir\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.927754 kubelet[2756]: I1104 23:52:48.927250 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/090adf82-49a2-4865-850d-d1d1cafb43d4-node-certs\") pod \"calico-node-j4xnh\" (UID: \"090adf82-49a2-4865-850d-d1d1cafb43d4\") " pod="calico-system/calico-node-j4xnh" Nov 4 23:52:48.965308 kubelet[2756]: E1104 23:52:48.965262 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:48.965881 containerd[1617]: time="2025-11-04T23:52:48.965817969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c98797585-b64fb,Uid:bc8a8687-5baf-4b7a-9e08-7f47ca49b802,Namespace:calico-system,Attempt:0,}" Nov 4 23:52:49.034461 kubelet[2756]: E1104 23:52:49.034411 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.034461 kubelet[2756]: W1104 23:52:49.034473 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.034660 kubelet[2756]: E1104 23:52:49.034552 2756 plugins.go:695] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.035827 kubelet[2756]: E1104 23:52:49.034942 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.035827 kubelet[2756]: W1104 23:52:49.035006 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.035827 kubelet[2756]: E1104 23:52:49.035022 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.039390 kubelet[2756]: E1104 23:52:49.038583 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.040140 kubelet[2756]: W1104 23:52:49.039547 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.040140 kubelet[2756]: E1104 23:52:49.040087 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.053640 kubelet[2756]: E1104 23:52:49.053425 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b" Nov 4 23:52:49.058147 kubelet[2756]: E1104 23:52:49.052625 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.058147 kubelet[2756]: W1104 23:52:49.057074 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.058147 kubelet[2756]: E1104 23:52:49.057114 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.060758 containerd[1617]: time="2025-11-04T23:52:49.060705599Z" level=info msg="connecting to shim 85b676f5a77c9d40b01214998ef449028fb6581382914c306428044561890423" address="unix:///run/containerd/s/dfcf1f8e7215d3fc112ef2bff0951031f7ae6411ea22b4cdf2f822ba1a0d2a8d" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:52:49.091172 systemd[1]: Started cri-containerd-85b676f5a77c9d40b01214998ef449028fb6581382914c306428044561890423.scope - libcontainer container 85b676f5a77c9d40b01214998ef449028fb6581382914c306428044561890423. 
Nov 4 23:52:49.111377 kubelet[2756]: E1104 23:52:49.111323 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.111377 kubelet[2756]: W1104 23:52:49.111358 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.111377 kubelet[2756]: E1104 23:52:49.111392 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.114202 kubelet[2756]: E1104 23:52:49.114175 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.114409 kubelet[2756]: W1104 23:52:49.114312 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.114409 kubelet[2756]: E1104 23:52:49.114345 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.114864 kubelet[2756]: E1104 23:52:49.114851 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.114923 kubelet[2756]: W1104 23:52:49.114912 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.115129 kubelet[2756]: E1104 23:52:49.115066 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.115462 kubelet[2756]: E1104 23:52:49.115411 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.115462 kubelet[2756]: W1104 23:52:49.115424 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.115462 kubelet[2756]: E1104 23:52:49.115434 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.115934 kubelet[2756]: E1104 23:52:49.115878 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.115934 kubelet[2756]: W1104 23:52:49.115890 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.115934 kubelet[2756]: E1104 23:52:49.115900 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:52:49.116289 kubelet[2756]: E1104 23:52:49.116218 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.116289 kubelet[2756]: W1104 23:52:49.116230 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.116289 kubelet[2756]: E1104 23:52:49.116250 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.117052 kubelet[2756]: E1104 23:52:49.116985 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.117052 kubelet[2756]: W1104 23:52:49.116998 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.117052 kubelet[2756]: E1104 23:52:49.117008 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.117454 kubelet[2756]: E1104 23:52:49.117386 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.117454 kubelet[2756]: W1104 23:52:49.117398 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.117454 kubelet[2756]: E1104 23:52:49.117408 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.117731 kubelet[2756]: E1104 23:52:49.117718 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.117867 kubelet[2756]: W1104 23:52:49.117789 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.117867 kubelet[2756]: E1104 23:52:49.117805 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.118355 kubelet[2756]: E1104 23:52:49.118263 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.118445 kubelet[2756]: W1104 23:52:49.118429 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.118505 kubelet[2756]: E1104 23:52:49.118490 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:52:49.118827 kubelet[2756]: E1104 23:52:49.118785 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.118827 kubelet[2756]: W1104 23:52:49.118797 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.118968 kubelet[2756]: E1104 23:52:49.118924 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.119314 kubelet[2756]: E1104 23:52:49.119244 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.119314 kubelet[2756]: W1104 23:52:49.119258 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.119314 kubelet[2756]: E1104 23:52:49.119270 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.119587 kubelet[2756]: E1104 23:52:49.119573 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.119655 kubelet[2756]: W1104 23:52:49.119643 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.119714 kubelet[2756]: E1104 23:52:49.119703 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.119984 kubelet[2756]: E1104 23:52:49.119970 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.120219 kubelet[2756]: W1104 23:52:49.120099 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.120219 kubelet[2756]: E1104 23:52:49.120117 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.120384 kubelet[2756]: E1104 23:52:49.120372 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.120454 kubelet[2756]: W1104 23:52:49.120442 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.120516 kubelet[2756]: E1104 23:52:49.120504 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:52:49.120819 kubelet[2756]: E1104 23:52:49.120752 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.120819 kubelet[2756]: W1104 23:52:49.120764 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.120819 kubelet[2756]: E1104 23:52:49.120773 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.121122 kubelet[2756]: E1104 23:52:49.121110 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.121231 kubelet[2756]: W1104 23:52:49.121175 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.121231 kubelet[2756]: E1104 23:52:49.121189 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.121492 kubelet[2756]: E1104 23:52:49.121479 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.121551 kubelet[2756]: W1104 23:52:49.121539 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.121599 kubelet[2756]: E1104 23:52:49.121589 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.121921 kubelet[2756]: E1104 23:52:49.121908 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.122074 kubelet[2756]: W1104 23:52:49.121976 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.122074 kubelet[2756]: E1104 23:52:49.121989 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.122326 kubelet[2756]: E1104 23:52:49.122314 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.122386 kubelet[2756]: W1104 23:52:49.122375 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.122439 kubelet[2756]: E1104 23:52:49.122429 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:52:49.128432 kubelet[2756]: E1104 23:52:49.128375 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.128432 kubelet[2756]: W1104 23:52:49.128392 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.128432 kubelet[2756]: E1104 23:52:49.128407 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.128709 kubelet[2756]: I1104 23:52:49.128594 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aba4eacc-4aef-4d09-939a-0ecd4f64c80b-registration-dir\") pod \"csi-node-driver-vmhxx\" (UID: \"aba4eacc-4aef-4d09-939a-0ecd4f64c80b\") " pod="calico-system/csi-node-driver-vmhxx" Nov 4 23:52:49.128958 kubelet[2756]: E1104 23:52:49.128886 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.128958 kubelet[2756]: W1104 23:52:49.128899 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.128958 kubelet[2756]: E1104 23:52:49.128915 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.128958 kubelet[2756]: I1104 23:52:49.128931 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aba4eacc-4aef-4d09-939a-0ecd4f64c80b-kubelet-dir\") pod \"csi-node-driver-vmhxx\" (UID: \"aba4eacc-4aef-4d09-939a-0ecd4f64c80b\") " pod="calico-system/csi-node-driver-vmhxx" Nov 4 23:52:49.129487 kubelet[2756]: E1104 23:52:49.129451 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.129528 kubelet[2756]: W1104 23:52:49.129486 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.129551 kubelet[2756]: E1104 23:52:49.129527 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.129819 kubelet[2756]: E1104 23:52:49.129796 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.129859 kubelet[2756]: W1104 23:52:49.129834 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.129882 kubelet[2756]: E1104 23:52:49.129859 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:52:49.130215 kubelet[2756]: E1104 23:52:49.130197 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.130274 kubelet[2756]: W1104 23:52:49.130211 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.130274 kubelet[2756]: E1104 23:52:49.130260 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.130432 kubelet[2756]: I1104 23:52:49.130293 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aba4eacc-4aef-4d09-939a-0ecd4f64c80b-socket-dir\") pod \"csi-node-driver-vmhxx\" (UID: \"aba4eacc-4aef-4d09-939a-0ecd4f64c80b\") " pod="calico-system/csi-node-driver-vmhxx" Nov 4 23:52:49.130599 kubelet[2756]: E1104 23:52:49.130583 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.130653 kubelet[2756]: W1104 23:52:49.130641 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.130710 kubelet[2756]: E1104 23:52:49.130699 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.130960 kubelet[2756]: E1104 23:52:49.130947 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.131017 kubelet[2756]: W1104 23:52:49.131006 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.131170 kubelet[2756]: E1104 23:52:49.131090 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.131170 kubelet[2756]: I1104 23:52:49.131109 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tntpn\" (UniqueName: \"kubernetes.io/projected/aba4eacc-4aef-4d09-939a-0ecd4f64c80b-kube-api-access-tntpn\") pod \"csi-node-driver-vmhxx\" (UID: \"aba4eacc-4aef-4d09-939a-0ecd4f64c80b\") " pod="calico-system/csi-node-driver-vmhxx" Nov 4 23:52:49.131452 kubelet[2756]: E1104 23:52:49.131424 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.131452 kubelet[2756]: W1104 23:52:49.131436 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.131555 kubelet[2756]: E1104 23:52:49.131540 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:52:49.131809 kubelet[2756]: E1104 23:52:49.131798 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.131863 kubelet[2756]: W1104 23:52:49.131852 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.131924 kubelet[2756]: E1104 23:52:49.131913 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.132248 kubelet[2756]: E1104 23:52:49.132224 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.132376 kubelet[2756]: W1104 23:52:49.132312 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.132376 kubelet[2756]: E1104 23:52:49.132334 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.132619 kubelet[2756]: E1104 23:52:49.132606 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.132690 kubelet[2756]: W1104 23:52:49.132678 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.132745 kubelet[2756]: E1104 23:52:49.132734 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.133152 kubelet[2756]: E1104 23:52:49.133108 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.133152 kubelet[2756]: W1104 23:52:49.133125 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.133152 kubelet[2756]: E1104 23:52:49.133136 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:52:49.133557 kubelet[2756]: E1104 23:52:49.133515 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:52:49.133557 kubelet[2756]: W1104 23:52:49.133529 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:52:49.133557 kubelet[2756]: E1104 23:52:49.133541 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 4 23:52:49.133700 kubelet[2756]: I1104 23:52:49.133684 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aba4eacc-4aef-4d09-939a-0ecd4f64c80b-varrun\") pod \"csi-node-driver-vmhxx\" (UID: \"aba4eacc-4aef-4d09-939a-0ecd4f64c80b\") " pod="calico-system/csi-node-driver-vmhxx"
Nov 4 23:52:49.133986 kubelet[2756]: E1104 23:52:49.133973 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:52:49.134075 kubelet[2756]: W1104 23:52:49.134051 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:52:49.134145 kubelet[2756]: E1104 23:52:49.134128 2756 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:52:49.141150 containerd[1617]: time="2025-11-04T23:52:49.141091471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c98797585-b64fb,Uid:bc8a8687-5baf-4b7a-9e08-7f47ca49b802,Namespace:calico-system,Attempt:0,} returns sandbox id \"85b676f5a77c9d40b01214998ef449028fb6581382914c306428044561890423\""
Nov 4 23:52:49.141974 kubelet[2756]: E1104 23:52:49.141932 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:49.143746 containerd[1617]: time="2025-11-04T23:52:49.143680426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 4 23:52:49.161056 kubelet[2756]: E1104 23:52:49.160980 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:49.161682 containerd[1617]: time="2025-11-04T23:52:49.161640423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j4xnh,Uid:090adf82-49a2-4865-850d-d1d1cafb43d4,Namespace:calico-system,Attempt:0,}"
Nov 4 23:52:49.197212 containerd[1617]: time="2025-11-04T23:52:49.196430574Z" level=info msg="connecting to shim 81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52" address="unix:///run/containerd/s/8c78366b5b83c4b9bce50a772d5424a9938aed63b343581bf4b18662a65b467f" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:52:49.237310 systemd[1]: Started cri-containerd-81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52.scope - libcontainer container 81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52.
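The three kubelet entries at 23:52:49.133–.134 above (driver-call.go:262, driver-call.go:149, plugins.go:695) recur dozens of times with only the timestamps changing, both here and again around 23:52:52; the repeats are elided from this excerpt. The error chain is self-consistent: the nodeagent~uds FlexVolume driver binary does not exist under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, so the probe's init call produces no stdout, and unmarshalling empty output is exactly what yields Go's "unexpected end of JSON input". A minimal sketch of that failure mode (the DriverStatus struct here is illustrative, not the kubelet's exact type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus stands in for the structure the kubelet expects a FlexVolume
// driver to print as JSON on stdout; the real type lives in driver-call.go,
// this two-field version is only for illustration.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// The nodeagent~uds binary is missing, so the "init" call returns no
	// stdout at all. Unmarshalling the resulting empty output reproduces
	// the exact error text in the kubelet log.
	var st DriverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println(err) // unexpected end of JSON input
	}
}
```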
Nov 4 23:52:49.273541 containerd[1617]: time="2025-11-04T23:52:49.273492314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j4xnh,Uid:090adf82-49a2-4865-850d-d1d1cafb43d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52\""
Nov 4 23:52:49.274519 kubelet[2756]: E1104 23:52:49.274345 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:50.516048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount561664415.mount: Deactivated successfully.
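The recurring dns.go:153 entry is the kubelet enforcing the classic resolv.conf ceiling of three nameservers: whatever the host lists beyond the first three resolvers is dropped, and the surviving "applied nameserver line" is logged. A sketch of that truncation, assuming a hypothetical fourth resolver (8.8.4.4 below) beyond the three the log actually shows:

```go
package main

import "fmt"

// maxNameservers mirrors the resolv.conf limit of three that the kubelet
// warns about in the dns.go entries above; the constant name is ours.
const maxNameservers = 3

func applyNameserverLimit(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	// 8.8.4.4 is a hypothetical fourth resolver; the log only shows the
	// three that survived the cut.
	fmt.Println(applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}))
	// [1.1.1.1 1.0.0.1 8.8.8.8]
}
```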
Nov 4 23:52:50.872335 containerd[1617]: time="2025-11-04T23:52:50.872200445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:50.873144 containerd[1617]: time="2025-11-04T23:52:50.873087708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 4 23:52:50.874236 containerd[1617]: time="2025-11-04T23:52:50.874199104Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:50.876108 containerd[1617]: time="2025-11-04T23:52:50.876080904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:50.876626 containerd[1617]: time="2025-11-04T23:52:50.876577601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.732850648s"
Nov 4 23:52:50.876626 containerd[1617]: time="2025-11-04T23:52:50.876615421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 4 23:52:50.877728 containerd[1617]: time="2025-11-04T23:52:50.877695238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 4 23:52:50.885923 containerd[1617]: time="2025-11-04T23:52:50.885880917Z" level=info msg="CreateContainer within sandbox \"85b676f5a77c9d40b01214998ef449028fb6581382914c306428044561890423\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 4 23:52:50.894380 containerd[1617]: time="2025-11-04T23:52:50.894349911Z" level=info msg="Container 97098e16c7b62f90dadf0135375ed847580a0049773c7c5f8896a4b91e93080a: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:52:50.901581 containerd[1617]: time="2025-11-04T23:52:50.901541344Z" level=info msg="CreateContainer within sandbox \"85b676f5a77c9d40b01214998ef449028fb6581382914c306428044561890423\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"97098e16c7b62f90dadf0135375ed847580a0049773c7c5f8896a4b91e93080a\""
Nov 4 23:52:50.902017 containerd[1617]: time="2025-11-04T23:52:50.901981274Z" level=info msg="StartContainer for \"97098e16c7b62f90dadf0135375ed847580a0049773c7c5f8896a4b91e93080a\""
Nov 4 23:52:50.903097 containerd[1617]: time="2025-11-04T23:52:50.903067322Z" level=info msg="connecting to shim 97098e16c7b62f90dadf0135375ed847580a0049773c7c5f8896a4b91e93080a" address="unix:///run/containerd/s/dfcf1f8e7215d3fc112ef2bff0951031f7ae6411ea22b4cdf2f822ba1a0d2a8d" protocol=ttrpc version=3
Nov 4 23:52:50.909673 kubelet[2756]: E1104 23:52:50.909613 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b"
Nov 4 23:52:50.934267 systemd[1]: Started cri-containerd-97098e16c7b62f90dadf0135375ed847580a0049773c7c5f8896a4b91e93080a.scope - libcontainer container 97098e16c7b62f90dadf0135375ed847580a0049773c7c5f8896a4b91e93080a.
Nov 4 23:52:51.027011 containerd[1617]: time="2025-11-04T23:52:51.026901315Z" level=info msg="StartContainer for \"97098e16c7b62f90dadf0135375ed847580a0049773c7c5f8896a4b91e93080a\" returns successfully"
Nov 4 23:52:51.998149 kubelet[2756]: E1104 23:52:51.998106 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:52.010541 kubelet[2756]: I1104 23:52:52.010457 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c98797585-b64fb" podStartSLOduration=2.276470074 podStartE2EDuration="4.010433821s" podCreationTimestamp="2025-11-04 23:52:48 +0000 UTC" firstStartedPulling="2025-11-04 23:52:49.143315326 +0000 UTC m=+19.325916279" lastFinishedPulling="2025-11-04 23:52:50.877279073 +0000 UTC m=+21.059880026" observedRunningTime="2025-11-04 23:52:52.009193293 +0000 UTC m=+22.191794256" watchObservedRunningTime="2025-11-04 23:52:52.010433821 +0000 UTC m=+22.193034774"
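The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). Re-deriving both figures from the logged timestamps, as a sketch:

```go
package main

import (
	"fmt"
	"time"
)

// parse handles the timestamp format the kubelet prints above (Go's default
// time.Time string form, without the monotonic "m=+..." suffix).
func parse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-11-04 23:52:48 +0000 UTC")
	firstPull := parse("2025-11-04 23:52:49.143315326 +0000 UTC")
	lastPull := parse("2025-11-04 23:52:50.877279073 +0000 UTC")
	running := parse("2025-11-04 23:52:52.010433821 +0000 UTC")

	e2e := running.Sub(created)          // wall-clock pod startup time
	slo := e2e - lastPull.Sub(firstPull) // same, excluding image pull time
	fmt.Println(e2e) // 4.010433821s == podStartE2EDuration
	fmt.Println(slo) // 2.276470074s == podStartSLOduration
}
```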
Nov 4 23:52:52.187618 containerd[1617]: time="2025-11-04T23:52:52.187553733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:52.188457 containerd[1617]: time="2025-11-04T23:52:52.188402272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 4 23:52:52.189732 containerd[1617]: time="2025-11-04T23:52:52.189705088Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:52.192105 containerd[1617]: time="2025-11-04T23:52:52.192062251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:52.192923 containerd[1617]: time="2025-11-04T23:52:52.192884099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.315159465s"
Nov 4 23:52:52.192972 containerd[1617]: time="2025-11-04T23:52:52.192924556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 4 23:52:52.195824 containerd[1617]: time="2025-11-04T23:52:52.195785649Z" level=info msg="CreateContainer within sandbox \"81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 4 23:52:52.206254 containerd[1617]: time="2025-11-04T23:52:52.206187844Z" level=info msg="Container 7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:52:52.215126 containerd[1617]: time="2025-11-04T23:52:52.215027074Z" level=info msg="CreateContainer within sandbox \"81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c\""
Nov 4 23:52:52.215816 containerd[1617]: time="2025-11-04T23:52:52.215777419Z" level=info msg="StartContainer for \"7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c\""
Nov 4 23:52:52.217743 containerd[1617]: time="2025-11-04T23:52:52.217704992Z" level=info msg="connecting to shim 7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c" address="unix:///run/containerd/s/8c78366b5b83c4b9bce50a772d5424a9938aed63b343581bf4b18662a65b467f" protocol=ttrpc version=3
Nov 4 23:52:52.242232 systemd[1]: Started cri-containerd-7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c.scope - libcontainer container 7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c.
Nov 4 23:52:52.311403 containerd[1617]: time="2025-11-04T23:52:52.310573228Z" level=info msg="StartContainer for \"7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c\" returns successfully"
Nov 4 23:52:52.315557 systemd[1]: cri-containerd-7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c.scope: Deactivated successfully.
Nov 4 23:52:52.315905 systemd[1]: cri-containerd-7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c.scope: Consumed 47ms CPU time, 6.3M memory peak, 4.6M written to disk.
Nov 4 23:52:52.323098 containerd[1617]: time="2025-11-04T23:52:52.318447700Z" level=info msg="received exit event container_id:\"7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c\" id:\"7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c\" pid:3464 exited_at:{seconds:1762300372 nanos:317226908}"
Nov 4 23:52:52.323098 containerd[1617]: time="2025-11-04T23:52:52.318710434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c\" id:\"7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c\" pid:3464 exited_at:{seconds:1762300372 nanos:317226908}"
Nov 4 23:52:52.344103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f240d6cc4cee99c10fabf94ff27a7d79dbaf4f8b34b773cc85cb449b80d0d7c-rootfs.mount: Deactivated successfully.
Nov 4 23:52:52.909532 kubelet[2756]: E1104 23:52:52.909428 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b"
Nov 4 23:52:53.002787 kubelet[2756]: I1104 23:52:53.002742 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 4 23:52:53.003376 kubelet[2756]: E1104 23:52:53.003130 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:53.003376 kubelet[2756]: E1104 23:52:53.003351 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:52:53.004324 containerd[1617]: time="2025-11-04T23:52:53.004281283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 4 23:52:54.909841 kubelet[2756]: E1104 23:52:54.909765 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b"
Nov 4 23:52:55.435873 containerd[1617]: time="2025-11-04T23:52:55.435809313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:52:55.436656 containerd[1617]: time="2025-11-04T23:52:55.436608968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 4 23:52:55.437728 containerd[1617]: time="2025-11-04T23:52:55.437681668Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
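The exited_at field in the exit events above is a protobuf-style seconds/nanos pair; converted to a wall-clock time it lines up with the surrounding journal stamps. A quick check:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1762300372 nanos:317226908} from the TaskExit
	// event for the flexvol-driver container above.
	fmt.Println(time.Unix(1762300372, 317226908).UTC().Format(time.RFC3339Nano))
	// 2025-11-04T23:52:52.317226908Z
}
```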
value:\"managed\"}" Nov 4 23:52:55.440558 containerd[1617]: time="2025-11-04T23:52:55.440522458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:52:55.441423 containerd[1617]: time="2025-11-04T23:52:55.441343915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.437009291s" Nov 4 23:52:55.441423 containerd[1617]: time="2025-11-04T23:52:55.441409458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 4 23:52:55.443809 containerd[1617]: time="2025-11-04T23:52:55.443768971Z" level=info msg="CreateContainer within sandbox \"81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 23:52:55.453912 containerd[1617]: time="2025-11-04T23:52:55.453872137Z" level=info msg="Container 6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:52:55.462110 containerd[1617]: time="2025-11-04T23:52:55.462062721Z" level=info msg="CreateContainer within sandbox \"81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b\"" Nov 4 23:52:55.462687 containerd[1617]: time="2025-11-04T23:52:55.462599552Z" level=info msg="StartContainer for \"6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b\"" Nov 4 23:52:55.464010 containerd[1617]: time="2025-11-04T23:52:55.463958331Z" level=info msg="connecting to shim 6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b" address="unix:///run/containerd/s/8c78366b5b83c4b9bce50a772d5424a9938aed63b343581bf4b18662a65b467f" protocol=ttrpc version=3 Nov 4 23:52:55.494258 systemd[1]: Started cri-containerd-6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b.scope - libcontainer container 6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b. Nov 4 23:52:55.540491 containerd[1617]: time="2025-11-04T23:52:55.540432761Z" level=info msg="StartContainer for \"6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b\" returns successfully" Nov 4 23:52:56.011263 kubelet[2756]: E1104 23:52:56.011224 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:56.635075 systemd[1]: cri-containerd-6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b.scope: Deactivated successfully. Nov 4 23:52:56.635854 systemd[1]: cri-containerd-6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b.scope: Consumed 705ms CPU time, 178.7M memory peak, 3.9M read from disk, 171.3M written to disk. 
Nov 4 23:52:56.636751 containerd[1617]: time="2025-11-04T23:52:56.636656155Z" level=info msg="received exit event container_id:\"6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b\" id:\"6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b\" pid:3523 exited_at:{seconds:1762300376 nanos:636061296}"
Nov 4 23:52:56.636751 containerd[1617]: time="2025-11-04T23:52:56.636723773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b\" id:\"6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b\" pid:3523 exited_at:{seconds:1762300376 nanos:636061296}"
Nov 4 23:52:56.661424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e9cdd2f72cc387acfde7a7576ad53062bea89cd1c093bbb6aad73755304af8b-rootfs.mount: Deactivated successfully.
Nov 4 23:52:56.722361 kubelet[2756]: I1104 23:52:56.722140 2756 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 4 23:52:56.762615 systemd[1]: Created slice kubepods-burstable-pod8edd433e_bff3_42d9_ba02_36618a779a17.slice - libcontainer container kubepods-burstable-pod8edd433e_bff3_42d9_ba02_36618a779a17.slice.
Nov 4 23:52:56.773225 systemd[1]: Created slice kubepods-burstable-poddb2f6229_0a76_421d_9c33_f0b44fd98a47.slice - libcontainer container kubepods-burstable-poddb2f6229_0a76_421d_9c33_f0b44fd98a47.slice.
Nov 4 23:52:56.781200 systemd[1]: Created slice kubepods-besteffort-pod61ed265f_0860_4f8f_9e00_9c62a99949f4.slice - libcontainer container kubepods-besteffort-pod61ed265f_0860_4f8f_9e00_9c62a99949f4.slice.
Nov 4 23:52:56.790453 systemd[1]: Created slice kubepods-besteffort-podb79db394_cdf6_4f69_a1b0_fe3bb4b1119d.slice - libcontainer container kubepods-besteffort-podb79db394_cdf6_4f69_a1b0_fe3bb4b1119d.slice.
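The slice names systemd reports above follow the kubelet's systemd cgroup driver convention: kubepods-<qos>-pod<uid>.slice, with dashes in the pod UID swapped for underscores because systemd reserves "-" to express slice hierarchy. A sketch of the mapping (the helper is ours, not a kubelet API):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the slice names in the systemd entries above from a
// pod's QoS class and UID; it is illustrative, not the kubelet's own code.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "8edd433e-bff3-42d9-ba02-36618a779a17"))
	// kubepods-burstable-pod8edd433e_bff3_42d9_ba02_36618a779a17.slice
}
```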
Nov 4 23:52:56.791070 kubelet[2756]: I1104 23:52:56.790956 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db2f6229-0a76-421d-9c33-f0b44fd98a47-config-volume\") pod \"coredns-668d6bf9bc-5mbcv\" (UID: \"db2f6229-0a76-421d-9c33-f0b44fd98a47\") " pod="kube-system/coredns-668d6bf9bc-5mbcv"
Nov 4 23:52:56.791414 kubelet[2756]: I1104 23:52:56.791166 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b79db394-cdf6-4f69-a1b0-fe3bb4b1119d-calico-apiserver-certs\") pod \"calico-apiserver-6f6449fc66-sdj9r\" (UID: \"b79db394-cdf6-4f69-a1b0-fe3bb4b1119d\") " pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r"
Nov 4 23:52:56.791488 kubelet[2756]: I1104 23:52:56.791451 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbpq\" (UniqueName: \"kubernetes.io/projected/db2f6229-0a76-421d-9c33-f0b44fd98a47-kube-api-access-hvbpq\") pod \"coredns-668d6bf9bc-5mbcv\" (UID: \"db2f6229-0a76-421d-9c33-f0b44fd98a47\") " pod="kube-system/coredns-668d6bf9bc-5mbcv"
Nov 4 23:52:56.791542 kubelet[2756]: I1104 23:52:56.791508 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/403117e1-6656-4ef1-bd00-648990dd9320-calico-apiserver-certs\") pod \"calico-apiserver-6f6449fc66-4tmvj\" (UID: \"403117e1-6656-4ef1-bd00-648990dd9320\") " pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj"
Nov 4 23:52:56.791542 kubelet[2756]: I1104 23:52:56.791531 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61ed265f-0860-4f8f-9e00-9c62a99949f4-tigera-ca-bundle\") pod \"calico-kube-controllers-68ffc7886c-bvp99\" (UID: \"61ed265f-0860-4f8f-9e00-9c62a99949f4\") " pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99"
Nov 4 23:52:56.791596 kubelet[2756]: I1104 23:52:56.791552 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8edd433e-bff3-42d9-ba02-36618a779a17-config-volume\") pod \"coredns-668d6bf9bc-g5nlt\" (UID: \"8edd433e-bff3-42d9-ba02-36618a779a17\") " pod="kube-system/coredns-668d6bf9bc-g5nlt"
Nov 4 23:52:56.791596 kubelet[2756]: I1104 23:52:56.791574 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw5cp\" (UniqueName: \"kubernetes.io/projected/8edd433e-bff3-42d9-ba02-36618a779a17-kube-api-access-vw5cp\") pod \"coredns-668d6bf9bc-g5nlt\" (UID: \"8edd433e-bff3-42d9-ba02-36618a779a17\") " pod="kube-system/coredns-668d6bf9bc-g5nlt"
Nov 4 23:52:56.791681 kubelet[2756]: I1104 23:52:56.791667 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d88858-9079-42f7-b468-71dc6a4f5e97-goldmane-ca-bundle\") pod \"goldmane-666569f655-dzl9n\" (UID: \"59d88858-9079-42f7-b468-71dc6a4f5e97\") " pod="calico-system/goldmane-666569f655-dzl9n"
Nov 4 23:52:56.791708 kubelet[2756]: I1104 23:52:56.791691 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-whisker-ca-bundle\") pod \"whisker-7995f6c4db-x8hc6\" (UID: \"c7bc4ad9-0bde-41dc-bff9-79439b4aba01\") " pod="calico-system/whisker-7995f6c4db-x8hc6"
Nov 4 23:52:56.791743 kubelet[2756]: I1104 23:52:56.791713 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btwfr\" (UniqueName: \"kubernetes.io/projected/b79db394-cdf6-4f69-a1b0-fe3bb4b1119d-kube-api-access-btwfr\") pod \"calico-apiserver-6f6449fc66-sdj9r\" (UID: \"b79db394-cdf6-4f69-a1b0-fe3bb4b1119d\") " pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r"
Nov 4 23:52:56.791743 kubelet[2756]: I1104 23:52:56.791731 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/59d88858-9079-42f7-b468-71dc6a4f5e97-goldmane-key-pair\") pod \"goldmane-666569f655-dzl9n\" (UID: \"59d88858-9079-42f7-b468-71dc6a4f5e97\") " pod="calico-system/goldmane-666569f655-dzl9n"
Nov 4 23:52:56.791789 kubelet[2756]: I1104 23:52:56.791746 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5kmw\" (UniqueName: \"kubernetes.io/projected/59d88858-9079-42f7-b468-71dc6a4f5e97-kube-api-access-x5kmw\") pod \"goldmane-666569f655-dzl9n\" (UID: \"59d88858-9079-42f7-b468-71dc6a4f5e97\") " pod="calico-system/goldmane-666569f655-dzl9n"
Nov 4 23:52:56.791813 kubelet[2756]: I1104 23:52:56.791803 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m27qz\" (UniqueName: \"kubernetes.io/projected/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-kube-api-access-m27qz\") pod \"whisker-7995f6c4db-x8hc6\" (UID: \"c7bc4ad9-0bde-41dc-bff9-79439b4aba01\") " pod="calico-system/whisker-7995f6c4db-x8hc6"
Nov 4 23:52:56.791845 kubelet[2756]: I1104 23:52:56.791828 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf728\" (UniqueName: \"kubernetes.io/projected/403117e1-6656-4ef1-bd00-648990dd9320-kube-api-access-qf728\") pod \"calico-apiserver-6f6449fc66-4tmvj\" (UID: \"403117e1-6656-4ef1-bd00-648990dd9320\") " pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj"
Nov 4 23:52:56.791871 kubelet[2756]: I1104 23:52:56.791848 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj4jt\" (UniqueName: \"kubernetes.io/projected/61ed265f-0860-4f8f-9e00-9c62a99949f4-kube-api-access-rj4jt\") pod \"calico-kube-controllers-68ffc7886c-bvp99\" (UID: \"61ed265f-0860-4f8f-9e00-9c62a99949f4\") " pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99"
Nov 4 23:52:56.791897 kubelet[2756]: I1104 23:52:56.791870 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59d88858-9079-42f7-b468-71dc6a4f5e97-config\") pod \"goldmane-666569f655-dzl9n\" (UID: \"59d88858-9079-42f7-b468-71dc6a4f5e97\") " pod="calico-system/goldmane-666569f655-dzl9n"
Nov 4 23:52:56.791897 kubelet[2756]: I1104 23:52:56.791888 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-whisker-backend-key-pair\") pod \"whisker-7995f6c4db-x8hc6\" (UID: \"c7bc4ad9-0bde-41dc-bff9-79439b4aba01\") " pod="calico-system/whisker-7995f6c4db-x8hc6"
pod="calico-system/whisker-7995f6c4db-x8hc6" Nov 4 23:52:56.801310 systemd[1]: Created slice kubepods-besteffort-pod59d88858_9079_42f7_b468_71dc6a4f5e97.slice - libcontainer container kubepods-besteffort-pod59d88858_9079_42f7_b468_71dc6a4f5e97.slice. Nov 4 23:52:56.807530 systemd[1]: Created slice kubepods-besteffort-podc7bc4ad9_0bde_41dc_bff9_79439b4aba01.slice - libcontainer container kubepods-besteffort-podc7bc4ad9_0bde_41dc_bff9_79439b4aba01.slice. Nov 4 23:52:56.814137 systemd[1]: Created slice kubepods-besteffort-pod403117e1_6656_4ef1_bd00_648990dd9320.slice - libcontainer container kubepods-besteffort-pod403117e1_6656_4ef1_bd00_648990dd9320.slice. Nov 4 23:52:56.925312 systemd[1]: Created slice kubepods-besteffort-podaba4eacc_4aef_4d09_939a_0ecd4f64c80b.slice - libcontainer container kubepods-besteffort-podaba4eacc_4aef_4d09_939a_0ecd4f64c80b.slice. Nov 4 23:52:56.927696 containerd[1617]: time="2025-11-04T23:52:56.927649991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmhxx,Uid:aba4eacc-4aef-4d09-939a-0ecd4f64c80b,Namespace:calico-system,Attempt:0,}" Nov 4 23:52:57.016225 kubelet[2756]: E1104 23:52:57.016174 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:57.018358 containerd[1617]: time="2025-11-04T23:52:57.017749468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 23:52:57.059236 containerd[1617]: time="2025-11-04T23:52:57.059156253Z" level=error msg="Failed to destroy network for sandbox \"281b5e8c34d9483f250808c27de0dd18a41ae1e27f41e587e21f055ae462063a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.060686 containerd[1617]: time="2025-11-04T23:52:57.060604589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmhxx,Uid:aba4eacc-4aef-4d09-939a-0ecd4f64c80b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"281b5e8c34d9483f250808c27de0dd18a41ae1e27f41e587e21f055ae462063a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.060951 kubelet[2756]: E1104 23:52:57.060903 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"281b5e8c34d9483f250808c27de0dd18a41ae1e27f41e587e21f055ae462063a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.061018 kubelet[2756]: E1104 23:52:57.060987 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"281b5e8c34d9483f250808c27de0dd18a41ae1e27f41e587e21f055ae462063a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmhxx" Nov 4 23:52:57.061018 kubelet[2756]: E1104 23:52:57.061012 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"281b5e8c34d9483f250808c27de0dd18a41ae1e27f41e587e21f055ae462063a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vmhxx" Nov 4 23:52:57.061166 kubelet[2756]: E1104 23:52:57.061133 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vmhxx_calico-system(aba4eacc-4aef-4d09-939a-0ecd4f64c80b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vmhxx_calico-system(aba4eacc-4aef-4d09-939a-0ecd4f64c80b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"281b5e8c34d9483f250808c27de0dd18a41ae1e27f41e587e21f055ae462063a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b" Nov 4 23:52:57.069928 kubelet[2756]: E1104 23:52:57.069893 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:57.070517 containerd[1617]: time="2025-11-04T23:52:57.070475317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5nlt,Uid:8edd433e-bff3-42d9-ba02-36618a779a17,Namespace:kube-system,Attempt:0,}" Nov 4 23:52:57.077316 kubelet[2756]: E1104 23:52:57.077286 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:52:57.080170 containerd[1617]: time="2025-11-04T23:52:57.080106124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5mbcv,Uid:db2f6229-0a76-421d-9c33-f0b44fd98a47,Namespace:kube-system,Attempt:0,}" Nov 4 23:52:57.085164 containerd[1617]: time="2025-11-04T23:52:57.085116896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ffc7886c-bvp99,Uid:61ed265f-0860-4f8f-9e00-9c62a99949f4,Namespace:calico-system,Attempt:0,}" Nov 4 23:52:57.096416 containerd[1617]: time="2025-11-04T23:52:57.096367561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6449fc66-sdj9r,Uid:b79db394-cdf6-4f69-a1b0-fe3bb4b1119d,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:52:57.106057 containerd[1617]: time="2025-11-04T23:52:57.105722068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dzl9n,Uid:59d88858-9079-42f7-b468-71dc6a4f5e97,Namespace:calico-system,Attempt:0,}" Nov 4 23:52:57.113553 containerd[1617]: time="2025-11-04T23:52:57.113516870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7995f6c4db-x8hc6,Uid:c7bc4ad9-0bde-41dc-bff9-79439b4aba01,Namespace:calico-system,Attempt:0,}" Nov 4 23:52:57.117492 containerd[1617]: time="2025-11-04T23:52:57.117444933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6449fc66-4tmvj,Uid:403117e1-6656-4ef1-bd00-648990dd9320,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:52:57.144098 containerd[1617]: time="2025-11-04T23:52:57.144016936Z" level=error msg="Failed to destroy network for sandbox \"5732679e175e216d491d5709e9f97b32c07b01d90fcebf2f5eba0bc9157432ec\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.154832 containerd[1617]: time="2025-11-04T23:52:57.154771949Z" level=error msg="Failed to destroy network for sandbox \"4549168b38ab1739ae0e8de2cd743e9438003966cc863d9cdcd1db9d5b4535a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.160459 containerd[1617]: time="2025-11-04T23:52:57.160394001Z" level=error msg="Failed to destroy network for sandbox \"f85d48a355c793eb7145e9bb2866acbf8a78caa7297269474b4bebd78ca23972\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.228108 containerd[1617]: time="2025-11-04T23:52:57.227944663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5nlt,Uid:8edd433e-bff3-42d9-ba02-36618a779a17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5732679e175e216d491d5709e9f97b32c07b01d90fcebf2f5eba0bc9157432ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.228313 kubelet[2756]: E1104 23:52:57.228254 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5732679e175e216d491d5709e9f97b32c07b01d90fcebf2f5eba0bc9157432ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.228376 kubelet[2756]: E1104 23:52:57.228347 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5732679e175e216d491d5709e9f97b32c07b01d90fcebf2f5eba0bc9157432ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g5nlt" Nov 4 23:52:57.228407 kubelet[2756]: E1104 23:52:57.228376 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5732679e175e216d491d5709e9f97b32c07b01d90fcebf2f5eba0bc9157432ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g5nlt" Nov 4 23:52:57.228465 kubelet[2756]: E1104 23:52:57.228433 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g5nlt_kube-system(8edd433e-bff3-42d9-ba02-36618a779a17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g5nlt_kube-system(8edd433e-bff3-42d9-ba02-36618a779a17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5732679e175e216d491d5709e9f97b32c07b01d90fcebf2f5eba0bc9157432ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g5nlt" podUID="8edd433e-bff3-42d9-ba02-36618a779a17" Nov 4 23:52:57.229483 containerd[1617]: time="2025-11-04T23:52:57.229415582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ffc7886c-bvp99,Uid:61ed265f-0860-4f8f-9e00-9c62a99949f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4549168b38ab1739ae0e8de2cd743e9438003966cc863d9cdcd1db9d5b4535a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.230005 kubelet[2756]: E1104 23:52:57.229737 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4549168b38ab1739ae0e8de2cd743e9438003966cc863d9cdcd1db9d5b4535a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.230005 kubelet[2756]: E1104 23:52:57.229823 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4549168b38ab1739ae0e8de2cd743e9438003966cc863d9cdcd1db9d5b4535a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99" Nov 4 23:52:57.230005 kubelet[2756]: E1104 23:52:57.229850 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4549168b38ab1739ae0e8de2cd743e9438003966cc863d9cdcd1db9d5b4535a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99" Nov 4 23:52:57.230131 kubelet[2756]: E1104 23:52:57.229920 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68ffc7886c-bvp99_calico-system(61ed265f-0860-4f8f-9e00-9c62a99949f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68ffc7886c-bvp99_calico-system(61ed265f-0860-4f8f-9e00-9c62a99949f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4549168b38ab1739ae0e8de2cd743e9438003966cc863d9cdcd1db9d5b4535a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99" podUID="61ed265f-0860-4f8f-9e00-9c62a99949f4" Nov 4 23:52:57.230470 containerd[1617]: time="2025-11-04T23:52:57.230427687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5mbcv,Uid:db2f6229-0a76-421d-9c33-f0b44fd98a47,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f85d48a355c793eb7145e9bb2866acbf8a78caa7297269474b4bebd78ca23972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.230880 kubelet[2756]: E1104 23:52:57.230842 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f85d48a355c793eb7145e9bb2866acbf8a78caa7297269474b4bebd78ca23972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.230936 kubelet[2756]: E1104 23:52:57.230893 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f85d48a355c793eb7145e9bb2866acbf8a78caa7297269474b4bebd78ca23972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5mbcv" Nov 4 23:52:57.230962 kubelet[2756]: E1104 23:52:57.230929 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f85d48a355c793eb7145e9bb2866acbf8a78caa7297269474b4bebd78ca23972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5mbcv" Nov 4 23:52:57.231019 kubelet[2756]: E1104 23:52:57.230989 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5mbcv_kube-system(db2f6229-0a76-421d-9c33-f0b44fd98a47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5mbcv_kube-system(db2f6229-0a76-421d-9c33-f0b44fd98a47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f85d48a355c793eb7145e9bb2866acbf8a78caa7297269474b4bebd78ca23972\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5mbcv" podUID="db2f6229-0a76-421d-9c33-f0b44fd98a47" Nov 4 23:52:57.306648 containerd[1617]: time="2025-11-04T23:52:57.306575131Z" level=error msg="Failed to destroy network for sandbox \"78d79594557678fb4a795fa31cec478b8b923b892730c7a2bfc9e39ca9e3c405\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.309595 containerd[1617]: time="2025-11-04T23:52:57.309485639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6449fc66-sdj9r,Uid:b79db394-cdf6-4f69-a1b0-fe3bb4b1119d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78d79594557678fb4a795fa31cec478b8b923b892730c7a2bfc9e39ca9e3c405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.310070 kubelet[2756]: E1104 23:52:57.309796 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78d79594557678fb4a795fa31cec478b8b923b892730c7a2bfc9e39ca9e3c405\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.310070 kubelet[2756]: E1104 23:52:57.309866 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78d79594557678fb4a795fa31cec478b8b923b892730c7a2bfc9e39ca9e3c405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r" Nov 4 23:52:57.310070 kubelet[2756]: E1104 23:52:57.309922 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78d79594557678fb4a795fa31cec478b8b923b892730c7a2bfc9e39ca9e3c405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r" Nov 4 23:52:57.311537 kubelet[2756]: E1104 23:52:57.311174 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f6449fc66-sdj9r_calico-apiserver(b79db394-cdf6-4f69-a1b0-fe3bb4b1119d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f6449fc66-sdj9r_calico-apiserver(b79db394-cdf6-4f69-a1b0-fe3bb4b1119d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78d79594557678fb4a795fa31cec478b8b923b892730c7a2bfc9e39ca9e3c405\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r" podUID="b79db394-cdf6-4f69-a1b0-fe3bb4b1119d" Nov 4 23:52:57.312672 containerd[1617]: time="2025-11-04T23:52:57.312475806Z" level=error msg="Failed to destroy network for sandbox \"d18f9abdc2cb0fb013bbf0cf3ccb9fad5ef1afb3c3872f022cab3655b3a60c5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.314609 containerd[1617]: time="2025-11-04T23:52:57.314578996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dzl9n,Uid:59d88858-9079-42f7-b468-71dc6a4f5e97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18f9abdc2cb0fb013bbf0cf3ccb9fad5ef1afb3c3872f022cab3655b3a60c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.315154 kubelet[2756]: E1104 23:52:57.315001 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18f9abdc2cb0fb013bbf0cf3ccb9fad5ef1afb3c3872f022cab3655b3a60c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.315260 kubelet[2756]: E1104 23:52:57.315242 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18f9abdc2cb0fb013bbf0cf3ccb9fad5ef1afb3c3872f022cab3655b3a60c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dzl9n" Nov 4 23:52:57.315383 kubelet[2756]: E1104 23:52:57.315320 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18f9abdc2cb0fb013bbf0cf3ccb9fad5ef1afb3c3872f022cab3655b3a60c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dzl9n" Nov 4 23:52:57.315606 kubelet[2756]: E1104 23:52:57.315475 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dzl9n_calico-system(59d88858-9079-42f7-b468-71dc6a4f5e97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dzl9n_calico-system(59d88858-9079-42f7-b468-71dc6a4f5e97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d18f9abdc2cb0fb013bbf0cf3ccb9fad5ef1afb3c3872f022cab3655b3a60c5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dzl9n" podUID="59d88858-9079-42f7-b468-71dc6a4f5e97" Nov 4 23:52:57.316416 containerd[1617]: time="2025-11-04T23:52:57.316256863Z" level=error msg="Failed to destroy network for sandbox \"ab2d0e1226f3fc8c53f6b1890709b020159d061778094e7515b4449efae6bf87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.317850 containerd[1617]: time="2025-11-04T23:52:57.317820827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7995f6c4db-x8hc6,Uid:c7bc4ad9-0bde-41dc-bff9-79439b4aba01,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2d0e1226f3fc8c53f6b1890709b020159d061778094e7515b4449efae6bf87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.318638 kubelet[2756]: E1104 23:52:57.318605 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2d0e1226f3fc8c53f6b1890709b020159d061778094e7515b4449efae6bf87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.318718 kubelet[2756]: E1104 23:52:57.318642 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2d0e1226f3fc8c53f6b1890709b020159d061778094e7515b4449efae6bf87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7995f6c4db-x8hc6" Nov 4 23:52:57.318718 kubelet[2756]: E1104 23:52:57.318656 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2d0e1226f3fc8c53f6b1890709b020159d061778094e7515b4449efae6bf87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7995f6c4db-x8hc6" Nov 4 23:52:57.319044 kubelet[2756]: E1104 23:52:57.318992 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7995f6c4db-x8hc6_calico-system(c7bc4ad9-0bde-41dc-bff9-79439b4aba01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7995f6c4db-x8hc6_calico-system(c7bc4ad9-0bde-41dc-bff9-79439b4aba01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab2d0e1226f3fc8c53f6b1890709b020159d061778094e7515b4449efae6bf87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7995f6c4db-x8hc6" podUID="c7bc4ad9-0bde-41dc-bff9-79439b4aba01" Nov 4 23:52:57.332070 containerd[1617]: time="2025-11-04T23:52:57.332001838Z" level=error msg="Failed to destroy network for sandbox \"b60486a316efd66ae4410f6b1bfc2231838f162ea01db117cda7101d00781c91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.333759 containerd[1617]: time="2025-11-04T23:52:57.333717307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6449fc66-4tmvj,Uid:403117e1-6656-4ef1-bd00-648990dd9320,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60486a316efd66ae4410f6b1bfc2231838f162ea01db117cda7101d00781c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.334065 kubelet[2756]: E1104 23:52:57.333983 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60486a316efd66ae4410f6b1bfc2231838f162ea01db117cda7101d00781c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:52:57.334124 kubelet[2756]: E1104 23:52:57.334087 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60486a316efd66ae4410f6b1bfc2231838f162ea01db117cda7101d00781c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj" Nov 4 23:52:57.334124 kubelet[2756]: E1104 23:52:57.334112 2756 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60486a316efd66ae4410f6b1bfc2231838f162ea01db117cda7101d00781c91\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj" Nov 4 23:52:57.334218 kubelet[2756]: E1104 23:52:57.334184 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f6449fc66-4tmvj_calico-apiserver(403117e1-6656-4ef1-bd00-648990dd9320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f6449fc66-4tmvj_calico-apiserver(403117e1-6656-4ef1-bd00-648990dd9320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b60486a316efd66ae4410f6b1bfc2231838f162ea01db117cda7101d00781c91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj" podUID="403117e1-6656-4ef1-bd00-648990dd9320" Nov 4 23:53:05.077235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524038760.mount: Deactivated successfully. Nov 4 23:53:05.888364 containerd[1617]: time="2025-11-04T23:53:05.888285869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:05.890297 containerd[1617]: time="2025-11-04T23:53:05.890075232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 4 23:53:05.892416 containerd[1617]: time="2025-11-04T23:53:05.891743317Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:05.898453 containerd[1617]: time="2025-11-04T23:53:05.898392764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:53:05.899386 containerd[1617]: time="2025-11-04T23:53:05.899108018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.880628977s" Nov 4 23:53:05.899386 containerd[1617]: time="2025-11-04T23:53:05.899269382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 23:53:05.918669 containerd[1617]: time="2025-11-04T23:53:05.918601995Z" level=info msg="CreateContainer within sandbox \"81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 23:53:05.997562 containerd[1617]: time="2025-11-04T23:53:05.997489729Z" level=info msg="Container 45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:53:06.061292 containerd[1617]: time="2025-11-04T23:53:06.061234087Z" level=info msg="CreateContainer within sandbox \"81ae9e9e44f4389bcdd0128946a2244688280337605699df9151508fd9650d52\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container 
id \"45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579\"" Nov 4 23:53:06.062068 containerd[1617]: time="2025-11-04T23:53:06.061787046Z" level=info msg="StartContainer for \"45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579\"" Nov 4 23:53:06.063690 containerd[1617]: time="2025-11-04T23:53:06.063658202Z" level=info msg="connecting to shim 45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579" address="unix:///run/containerd/s/8c78366b5b83c4b9bce50a772d5424a9938aed63b343581bf4b18662a65b467f" protocol=ttrpc version=3 Nov 4 23:53:06.109238 systemd[1]: Started cri-containerd-45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579.scope - libcontainer container 45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579. Nov 4 23:53:06.159725 containerd[1617]: time="2025-11-04T23:53:06.159605594Z" level=info msg="StartContainer for \"45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579\" returns successfully" Nov 4 23:53:06.241396 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 23:53:06.242726 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 4 23:53:06.458363 kubelet[2756]: I1104 23:53:06.458177 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-whisker-ca-bundle\") pod \"c7bc4ad9-0bde-41dc-bff9-79439b4aba01\" (UID: \"c7bc4ad9-0bde-41dc-bff9-79439b4aba01\") " Nov 4 23:53:06.458363 kubelet[2756]: I1104 23:53:06.458261 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m27qz\" (UniqueName: \"kubernetes.io/projected/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-kube-api-access-m27qz\") pod \"c7bc4ad9-0bde-41dc-bff9-79439b4aba01\" (UID: \"c7bc4ad9-0bde-41dc-bff9-79439b4aba01\") " Nov 4 23:53:06.458363 kubelet[2756]: I1104 23:53:06.458298 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-whisker-backend-key-pair\") pod \"c7bc4ad9-0bde-41dc-bff9-79439b4aba01\" (UID: \"c7bc4ad9-0bde-41dc-bff9-79439b4aba01\") " Nov 4 23:53:06.459210 kubelet[2756]: I1104 23:53:06.458954 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c7bc4ad9-0bde-41dc-bff9-79439b4aba01" (UID: "c7bc4ad9-0bde-41dc-bff9-79439b4aba01"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:53:06.463748 kubelet[2756]: I1104 23:53:06.463667 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-kube-api-access-m27qz" (OuterVolumeSpecName: "kube-api-access-m27qz") pod "c7bc4ad9-0bde-41dc-bff9-79439b4aba01" (UID: "c7bc4ad9-0bde-41dc-bff9-79439b4aba01"). InnerVolumeSpecName "kube-api-access-m27qz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:53:06.465023 systemd[1]: var-lib-kubelet-pods-c7bc4ad9\x2d0bde\x2d41dc\x2dbff9\x2d79439b4aba01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm27qz.mount: Deactivated successfully. 
Nov 4 23:53:06.465969 kubelet[2756]: I1104 23:53:06.465925 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c7bc4ad9-0bde-41dc-bff9-79439b4aba01" (UID: "c7bc4ad9-0bde-41dc-bff9-79439b4aba01"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:53:06.466270 systemd[1]: var-lib-kubelet-pods-c7bc4ad9\x2d0bde\x2d41dc\x2dbff9\x2d79439b4aba01-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 4 23:53:06.558730 kubelet[2756]: I1104 23:53:06.558658 2756 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:06.558730 kubelet[2756]: I1104 23:53:06.558698 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m27qz\" (UniqueName: \"kubernetes.io/projected/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-kube-api-access-m27qz\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:06.558730 kubelet[2756]: I1104 23:53:06.558709 2756 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7bc4ad9-0bde-41dc-bff9-79439b4aba01-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 4 23:53:07.046062 kubelet[2756]: E1104 23:53:07.045518 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:07.053156 systemd[1]: Removed slice kubepods-besteffort-podc7bc4ad9_0bde_41dc_bff9_79439b4aba01.slice - libcontainer container kubepods-besteffort-podc7bc4ad9_0bde_41dc_bff9_79439b4aba01.slice. Nov 4 23:53:07.064007 kubelet[2756]: I1104 23:53:07.063192 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j4xnh" podStartSLOduration=2.43645353 podStartE2EDuration="19.063173491s" podCreationTimestamp="2025-11-04 23:52:48 +0000 UTC" firstStartedPulling="2025-11-04 23:52:49.274761649 +0000 UTC m=+19.457362602" lastFinishedPulling="2025-11-04 23:53:05.90148161 +0000 UTC m=+36.084082563" observedRunningTime="2025-11-04 23:53:07.062413073 +0000 UTC m=+37.245014026" watchObservedRunningTime="2025-11-04 23:53:07.063173491 +0000 UTC m=+37.245774444" Nov 4 23:53:07.202946 systemd[1]: Created slice kubepods-besteffort-podbe662a3c_e749_45bb_a12b_0eac658e4ad2.slice - libcontainer container kubepods-besteffort-podbe662a3c_e749_45bb_a12b_0eac658e4ad2.slice. 
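The pod_startup_latency_tracker entry above can be cross-checked from its own timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to equal that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. startup latency with pull time excluded. A worked check in Go (editorial sketch; the SLO interpretation is inferred from the values in this line, not from kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the log entry above.
	created := parse("2025-11-04 23:52:48 +0000 UTC")
	firstPull := parse("2025-11-04 23:52:49.274761649 +0000 UTC")
	lastPull := parse("2025-11-04 23:53:05.90148161 +0000 UTC")
	running := parse("2025-11-04 23:53:07.063173491 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)      // 19.063173491s = podStartE2EDuration
	pull := lastPull.Sub(firstPull)  // 16.626719961s spent pulling calico/node
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull = 2.43645353s = podStartSLOduration
}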
Nov 4 23:53:07.250682 containerd[1617]: time="2025-11-04T23:53:07.250617677Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579\" id:\"360a76e29936e5977cc2ee08f8dacd92b80d3d769cb5b4227330d6fe60e3651f\" pid:3908 exit_status:1 exited_at:{seconds:1762300387 nanos:250164075}" Nov 4 23:53:07.264351 kubelet[2756]: I1104 23:53:07.264289 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be662a3c-e749-45bb-a12b-0eac658e4ad2-whisker-backend-key-pair\") pod \"whisker-6c9f67fff6-f7vtj\" (UID: \"be662a3c-e749-45bb-a12b-0eac658e4ad2\") " pod="calico-system/whisker-6c9f67fff6-f7vtj" Nov 4 23:53:07.264351 kubelet[2756]: I1104 23:53:07.264337 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be662a3c-e749-45bb-a12b-0eac658e4ad2-whisker-ca-bundle\") pod \"whisker-6c9f67fff6-f7vtj\" (UID: \"be662a3c-e749-45bb-a12b-0eac658e4ad2\") " pod="calico-system/whisker-6c9f67fff6-f7vtj" Nov 4 23:53:07.264351 kubelet[2756]: I1104 23:53:07.264364 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hp9m\" (UniqueName: \"kubernetes.io/projected/be662a3c-e749-45bb-a12b-0eac658e4ad2-kube-api-access-5hp9m\") pod \"whisker-6c9f67fff6-f7vtj\" (UID: \"be662a3c-e749-45bb-a12b-0eac658e4ad2\") " pod="calico-system/whisker-6c9f67fff6-f7vtj" Nov 4 23:53:07.508759 containerd[1617]: time="2025-11-04T23:53:07.508698284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c9f67fff6-f7vtj,Uid:be662a3c-e749-45bb-a12b-0eac658e4ad2,Namespace:calico-system,Attempt:0,}" Nov 4 23:53:07.696290 kubelet[2756]: I1104 23:53:07.696230 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:53:07.716252 kubelet[2756]: E1104 23:53:07.697817 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:07.910023 kubelet[2756]: E1104 23:53:07.909964 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:07.910537 containerd[1617]: time="2025-11-04T23:53:07.910478935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5nlt,Uid:8edd433e-bff3-42d9-ba02-36618a779a17,Namespace:kube-system,Attempt:0,}" Nov 4 23:53:07.911064 containerd[1617]: time="2025-11-04T23:53:07.911002308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6449fc66-sdj9r,Uid:b79db394-cdf6-4f69-a1b0-fe3bb4b1119d,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:53:07.912294 kubelet[2756]: I1104 23:53:07.912250 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7bc4ad9-0bde-41dc-bff9-79439b4aba01" path="/var/lib/kubelet/pods/c7bc4ad9-0bde-41dc-bff9-79439b4aba01/volumes" Nov 4 23:53:08.047820 kubelet[2756]: E1104 23:53:08.047746 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:08.048892 kubelet[2756]: E1104 23:53:08.047868 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:08.180423 containerd[1617]: time="2025-11-04T23:53:08.179024485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579\" id:\"5cbed3defbcf734c4c45a5f63de5d56f138897348494e0f6687135ec9f52e2ac\" pid:4047 exit_status:1 exited_at:{seconds:1762300388 nanos:178573557}" Nov 4 23:53:08.475680 systemd-networkd[1518]: cali5a7d8da3e9d: Link UP Nov 4 23:53:08.477695 systemd-networkd[1518]: cali5a7d8da3e9d: Gained carrier Nov 4 23:53:08.503232 containerd[1617]: 2025-11-04 23:53:07.812 [INFO][4021] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:53:08.503232 containerd[1617]: 2025-11-04 23:53:07.934 [INFO][4021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0 whisker-6c9f67fff6- calico-system be662a3c-e749-45bb-a12b-0eac658e4ad2 939 0 2025-11-04 23:53:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c9f67fff6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c9f67fff6-f7vtj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5a7d8da3e9d [] [] }} ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Namespace="calico-system" Pod="whisker-6c9f67fff6-f7vtj" WorkloadEndpoint="localhost-k8s-whisker--6c9f67fff6--f7vtj-" Nov 4 23:53:08.503232 containerd[1617]: 2025-11-04 23:53:07.935 [INFO][4021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Namespace="calico-system" Pod="whisker-6c9f67fff6-f7vtj" WorkloadEndpoint="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" Nov 4 23:53:08.503232 containerd[1617]: 2025-11-04 23:53:08.206 [INFO][4057] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" HandleID="k8s-pod-network.6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Workload="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.207 [INFO][4057] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" HandleID="k8s-pod-network.6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Workload="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c9f67fff6-f7vtj", "timestamp":"2025-11-04 23:53:08.206624888 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.207 [INFO][4057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.208 [INFO][4057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
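Every RunPodSandbox failure earlier in this log bottomed out in the same readiness probe, and the "File /var/lib/calico/mtu does not exist" entry just above is its benign variant: before wiring a pod, the Calico CNI plugin stats files that calico/node publishes under /var/lib/calico/. A minimal Go reproduction of the nodename check and of the error fragment those failures carried (editorial sketch, not the plugin's actual code):

package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		// os.Stat's *PathError prints as "stat <path>: <cause>", which is
		// exactly the fragment embedded in the RunPodSandbox errors above.
		fmt.Printf("%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		return
	}
	fmt.Println("nodename published; CNI ADD can proceed")
}

Once the calico-node container started (the StartContainer entry at 23:53:06), the file exists and sandbox creation begins to succeed, which is what the remainder of the log shows.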
Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.208 [INFO][4057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.319 [INFO][4057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" host="localhost" Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.348 [INFO][4057] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.416 [INFO][4057] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.433 [INFO][4057] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.438 [INFO][4057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:08.504339 containerd[1617]: 2025-11-04 23:53:08.438 [INFO][4057] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" host="localhost" Nov 4 23:53:08.504591 containerd[1617]: 2025-11-04 23:53:08.440 [INFO][4057] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb Nov 4 23:53:08.504591 containerd[1617]: 2025-11-04 23:53:08.448 [INFO][4057] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" host="localhost" Nov 4 23:53:08.504591 containerd[1617]: 2025-11-04 23:53:08.455 [INFO][4057] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" host="localhost" Nov 4 23:53:08.504591 containerd[1617]: 2025-11-04 23:53:08.456 [INFO][4057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" host="localhost" Nov 4 23:53:08.504591 containerd[1617]: 2025-11-04 23:53:08.456 [INFO][4057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
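The trace above is one complete IPAM transaction: take the host-wide lock, confirm the host's affinity to block 192.168.88.128/26, claim the first free address (192.168.88.129 here; the next pod below gets .130), then release the lock. A toy allocator over the same block (editorial Go sketch; Calico's real allocator persists block bitmaps and handles in the datastore, as the "Creating new handle" and "Writing block" entries show):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	// Assumption for illustration: the block's own network address is not handed out.
	allocated := map[netip.Addr]bool{block.Addr(): true}

	next := func() netip.Addr {
		for a := block.Addr(); block.Contains(a); a = a.Next() {
			if !allocated[a] {
				allocated[a] = true
				return a
			}
		}
		return netip.Addr{} // block exhausted
	}
	fmt.Println(next(), next()) // 192.168.88.129 192.168.88.130, as in the log
}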
Nov 4 23:53:08.504591 containerd[1617]: 2025-11-04 23:53:08.456 [INFO][4057] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" HandleID="k8s-pod-network.6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Workload="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" Nov 4 23:53:08.504737 containerd[1617]: 2025-11-04 23:53:08.463 [INFO][4021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Namespace="calico-system" Pod="whisker-6c9f67fff6-f7vtj" WorkloadEndpoint="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0", GenerateName:"whisker-6c9f67fff6-", Namespace:"calico-system", SelfLink:"", UID:"be662a3c-e749-45bb-a12b-0eac658e4ad2", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c9f67fff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c9f67fff6-f7vtj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5a7d8da3e9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:08.504737 containerd[1617]: 2025-11-04 23:53:08.464 [INFO][4021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Namespace="calico-system" Pod="whisker-6c9f67fff6-f7vtj" WorkloadEndpoint="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" Nov 4 23:53:08.504829 containerd[1617]: 2025-11-04 23:53:08.464 [INFO][4021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a7d8da3e9d ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Namespace="calico-system" Pod="whisker-6c9f67fff6-f7vtj" WorkloadEndpoint="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" Nov 4 23:53:08.504829 containerd[1617]: 2025-11-04 23:53:08.479 [INFO][4021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Namespace="calico-system" Pod="whisker-6c9f67fff6-f7vtj" WorkloadEndpoint="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" Nov 4 23:53:08.504876 containerd[1617]: 2025-11-04 23:53:08.484 [INFO][4021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Namespace="calico-system" Pod="whisker-6c9f67fff6-f7vtj" WorkloadEndpoint="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0", GenerateName:"whisker-6c9f67fff6-", Namespace:"calico-system", SelfLink:"", UID:"be662a3c-e749-45bb-a12b-0eac658e4ad2", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 53, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c9f67fff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb", Pod:"whisker-6c9f67fff6-f7vtj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5a7d8da3e9d", MAC:"9a:5c:7b:03:a9:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:08.504930 containerd[1617]: 2025-11-04 23:53:08.499 [INFO][4021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" Namespace="calico-system" Pod="whisker-6c9f67fff6-f7vtj" WorkloadEndpoint="localhost-k8s-whisker--6c9f67fff6--f7vtj-eth0" Nov 4 23:53:08.538124 systemd-networkd[1518]: calic3e6613a109: Link UP Nov 4 23:53:08.538342 systemd-networkd[1518]: calic3e6613a109: Gained carrier Nov 4 23:53:08.552716 containerd[1617]: 2025-11-04 23:53:08.358 [INFO][4068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:53:08.552716 containerd[1617]: 2025-11-04 23:53:08.434 [INFO][4068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0 coredns-668d6bf9bc- kube-system 8edd433e-bff3-42d9-ba02-36618a779a17 848 0 2025-11-04 23:52:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-g5nlt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3e6613a109 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5nlt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g5nlt-" Nov 4 23:53:08.552716 containerd[1617]: 2025-11-04 23:53:08.434 [INFO][4068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5nlt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" Nov 4 23:53:08.552716 containerd[1617]: 2025-11-04 23:53:08.495 [INFO][4096] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" 
HandleID="k8s-pod-network.74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Workload="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.496 [INFO][4096] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" HandleID="k8s-pod-network.74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Workload="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034da40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-g5nlt", "timestamp":"2025-11-04 23:53:08.495798562 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.496 [INFO][4096] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.496 [INFO][4096] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.496 [INFO][4096] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.505 [INFO][4096] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" host="localhost" Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.510 [INFO][4096] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.514 [INFO][4096] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.518 [INFO][4096] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.520 [INFO][4096] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:08.552967 containerd[1617]: 2025-11-04 23:53:08.520 [INFO][4096] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" host="localhost" Nov 4 23:53:08.553206 containerd[1617]: 2025-11-04 23:53:08.522 [INFO][4096] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a Nov 4 23:53:08.553206 containerd[1617]: 2025-11-04 23:53:08.525 [INFO][4096] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" host="localhost" Nov 4 23:53:08.553206 containerd[1617]: 2025-11-04 23:53:08.531 [INFO][4096] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" host="localhost" Nov 4 23:53:08.553206 containerd[1617]: 2025-11-04 23:53:08.531 [INFO][4096] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" host="localhost" Nov 4 23:53:08.553206 
containerd[1617]: 2025-11-04 23:53:08.532 [INFO][4096] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:53:08.553206 containerd[1617]: 2025-11-04 23:53:08.532 [INFO][4096] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" HandleID="k8s-pod-network.74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Workload="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" Nov 4 23:53:08.553331 containerd[1617]: 2025-11-04 23:53:08.535 [INFO][4068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5nlt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8edd433e-bff3-42d9-ba02-36618a779a17", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-g5nlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e6613a109", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:08.553408 containerd[1617]: 2025-11-04 23:53:08.536 [INFO][4068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5nlt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" Nov 4 23:53:08.553408 containerd[1617]: 2025-11-04 23:53:08.536 [INFO][4068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3e6613a109 ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5nlt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" Nov 4 23:53:08.553408 containerd[1617]: 2025-11-04 23:53:08.538 [INFO][4068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-g5nlt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" Nov 4 23:53:08.553477 containerd[1617]: 2025-11-04 23:53:08.538 [INFO][4068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5nlt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8edd433e-bff3-42d9-ba02-36618a779a17", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a", Pod:"coredns-668d6bf9bc-g5nlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e6613a109", MAC:"6e:cd:40:6e:aa:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:08.553477 containerd[1617]: 2025-11-04 23:53:08.549 [INFO][4068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" Namespace="kube-system" Pod="coredns-668d6bf9bc-g5nlt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g5nlt-eth0" Nov 4 23:53:08.692714 systemd-networkd[1518]: calicf19856fcba: Link UP Nov 4 23:53:08.692947 systemd-networkd[1518]: calicf19856fcba: Gained carrier Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.456 [INFO][4084] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.474 [INFO][4084] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0 calico-apiserver-6f6449fc66- calico-apiserver b79db394-cdf6-4f69-a1b0-fe3bb4b1119d 856 0 2025-11-04 23:52:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f6449fc66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
localhost calico-apiserver-6f6449fc66-sdj9r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicf19856fcba [] [] }} ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-sdj9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.474 [INFO][4084] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-sdj9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.518 [INFO][4112] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" HandleID="k8s-pod-network.96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Workload="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.518 [INFO][4112] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" HandleID="k8s-pod-network.96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Workload="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df310), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f6449fc66-sdj9r", "timestamp":"2025-11-04 23:53:08.517994762 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.518 [INFO][4112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.532 [INFO][4112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
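The ipam/ipam.go entries above (and the matching runs for the whisker and coredns endpoints earlier) trace one complete Calico auto-assign: take the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, load the block, claim the first free ordinal, write the block back, and release the lock. A self-contained Go sketch of that claim loop follows — a simplified in-memory model only; real Calico IPAM persists blocks in the datastore and retries on write conflicts:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models one /26 IPAM block with per-address allocation state,
// loosely following the steps logged by ipam/ipam.go above. Actual
// Calico stores blocks in the datastore and resolves races there.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // addr -> handle
}

var hostLock sync.Mutex // stands in for the "host-wide IPAM lock"

func autoAssign(b *block, handle string) (netip.Addr, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	// Walk the block from its network address, skipping the network
	// address itself, and claim the first free ordinal -- which is
	// why the pods above receive .129, .130, .131 in sequence.
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, used := b.allocated[a]; !used {
			b.allocated[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]string{},
	}
	for _, h := range []string{"whisker", "coredns-g5nlt", "apiserver-sdj9r"} {
		ip, _ := autoAssign(b, h)
		fmt.Printf("claimed %s for handle %s\n", ip, h)
	}
}
```

Run against the logged block, this yields 192.168.88.129, .130, and .131 — the same order the journal shows the three workloads being assigned.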
Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.532 [INFO][4112] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.605 [INFO][4112] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" host="localhost" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.622 [INFO][4112] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.644 [INFO][4112] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.654 [INFO][4112] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.669 [INFO][4112] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.669 [INFO][4112] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" host="localhost" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.672 [INFO][4112] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.678 [INFO][4112] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" host="localhost" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.684 [INFO][4112] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" host="localhost" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.684 [INFO][4112] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" host="localhost" Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.684 [INFO][4112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
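The CNI entries in this stretch all share one line shape once the journald prefix (`Nov 4 ... containerd[1617]:`) is stripped: timestamp, level, goroutine/request ID in brackets, source file and line, then the message. A small scanner for that shape, useful when grepping a node's journal for IPAM activity (the regex is an assumption fitted to the entries shown here, not a format containerd guarantees):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines like:
//   2025-11-04 23:53:08.532 [INFO][4112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
var cniLine = regexp.MustCompile(
	`^(\S+ \S+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // endpoint dumps are long
	for sc.Scan() {
		m := cniLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // not a CNI plugin line
		}
		fmt.Printf("%s %-5s id=%s %s:%s %s\n", m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```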
Nov 4 23:53:08.712176 containerd[1617]: 2025-11-04 23:53:08.684 [INFO][4112] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" HandleID="k8s-pod-network.96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Workload="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" Nov 4 23:53:08.713117 containerd[1617]: 2025-11-04 23:53:08.689 [INFO][4084] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-sdj9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0", GenerateName:"calico-apiserver-6f6449fc66-", Namespace:"calico-apiserver", SelfLink:"", UID:"b79db394-cdf6-4f69-a1b0-fe3bb4b1119d", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6449fc66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f6449fc66-sdj9r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf19856fcba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:08.713117 containerd[1617]: 2025-11-04 23:53:08.689 [INFO][4084] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-sdj9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" Nov 4 23:53:08.713117 containerd[1617]: 2025-11-04 23:53:08.689 [INFO][4084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf19856fcba ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-sdj9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" Nov 4 23:53:08.713117 containerd[1617]: 2025-11-04 23:53:08.692 [INFO][4084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-sdj9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" Nov 4 23:53:08.713117 containerd[1617]: 2025-11-04 23:53:08.692 [INFO][4084] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-sdj9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0", GenerateName:"calico-apiserver-6f6449fc66-", Namespace:"calico-apiserver", SelfLink:"", UID:"b79db394-cdf6-4f69-a1b0-fe3bb4b1119d", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6449fc66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a", Pod:"calico-apiserver-6f6449fc66-sdj9r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf19856fcba", MAC:"72:27:3d:0a:52:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:08.713117 containerd[1617]: 2025-11-04 23:53:08.707 [INFO][4084] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-sdj9r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--sdj9r-eth0" Nov 4 23:53:08.729672 containerd[1617]: time="2025-11-04T23:53:08.729499969Z" level=info msg="connecting to shim 6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb" address="unix:///run/containerd/s/914a0a164faebb9e4975fb88b9c281a18630e0e33aa265cb773c268571fb93e6" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:53:08.741493 containerd[1617]: time="2025-11-04T23:53:08.741418881Z" level=info msg="connecting to shim 74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a" address="unix:///run/containerd/s/80d53996bb3bf78b71dc97a3f908d001ccea3212d2e4ecffb65201ef9df0bee4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:53:08.773974 containerd[1617]: time="2025-11-04T23:53:08.773214516Z" level=info msg="connecting to shim 96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a" address="unix:///run/containerd/s/5cee4189aa5d36a1b93e5eae250c86e2a918edc948d2a8a42c3bfc3a4193bea7" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:53:08.809227 systemd[1]: Started cri-containerd-96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a.scope - libcontainer container 96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a. 
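The "connecting to shim ... protocol=ttrpc version=3" entries above come from containerd's CRI plugin dialing each sandbox's shim socket under /run/containerd/s/. Those per-sandbox ttrpc sockets are internal, but the same containerd instance can be inspected over its public GRPC socket. A minimal sketch with the official Go client — socket path and namespace assumed to match the defaults visible in the log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; the shim addresses logged above
	// (unix:///run/containerd/s/...) are ttrpc-internal and not
	// intended for direct clients.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed sandboxes live in the k8s.io namespace, matching
	// the namespace=k8s.io field on the log entries.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // e.g. 74171cddb156... for the coredns sandbox
	}
}
```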
Nov 4 23:53:08.830601 systemd[1]: Started cri-containerd-74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a.scope - libcontainer container 74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a. Nov 4 23:53:08.846117 systemd[1]: Started cri-containerd-6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb.scope - libcontainer container 6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb. Nov 4 23:53:08.846724 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:53:08.867121 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:53:08.903984 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:53:08.912061 containerd[1617]: time="2025-11-04T23:53:08.911402943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmhxx,Uid:aba4eacc-4aef-4d09-939a-0ecd4f64c80b,Namespace:calico-system,Attempt:0,}" Nov 4 23:53:08.975499 containerd[1617]: time="2025-11-04T23:53:08.975410294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5nlt,Uid:8edd433e-bff3-42d9-ba02-36618a779a17,Namespace:kube-system,Attempt:0,} returns sandbox id \"74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a\"" Nov 4 23:53:08.980660 kubelet[2756]: E1104 23:53:08.980121 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:08.982401 containerd[1617]: time="2025-11-04T23:53:08.982376312Z" level=info msg="CreateContainer within sandbox \"74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:53:08.982584 containerd[1617]: time="2025-11-04T23:53:08.982541782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6449fc66-sdj9r,Uid:b79db394-cdf6-4f69-a1b0-fe3bb4b1119d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"96784e2163dc03379e737094b54e1421ee9156a23c7d676e6c609b931bb8562a\"" Nov 4 23:53:08.988381 containerd[1617]: time="2025-11-04T23:53:08.988352730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c9f67fff6-f7vtj,Uid:be662a3c-e749-45bb-a12b-0eac658e4ad2,Namespace:calico-system,Attempt:0,} returns sandbox id \"6236723a4a665496e517f3e23ec315d7547f6fa14885f84cb171e28a5025daeb\"" Nov 4 23:53:08.992913 containerd[1617]: time="2025-11-04T23:53:08.992816807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:53:09.006606 containerd[1617]: time="2025-11-04T23:53:09.006580583Z" level=info msg="Container 8cf3867e530eaedfe853080d77bc429f1ffdfc25beb7f1f7f5bf16d91ba41e0a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:53:09.016534 containerd[1617]: time="2025-11-04T23:53:09.016491160Z" level=info msg="CreateContainer within sandbox \"74171cddb15603c2fddc28dbe4a56b64453b61d409fa9ca443ba15809cf8736a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8cf3867e530eaedfe853080d77bc429f1ffdfc25beb7f1f7f5bf16d91ba41e0a\"" Nov 4 23:53:09.017248 containerd[1617]: time="2025-11-04T23:53:09.017216523Z" level=info msg="StartContainer for \"8cf3867e530eaedfe853080d77bc429f1ffdfc25beb7f1f7f5bf16d91ba41e0a\"" Nov 4 23:53:09.018371 containerd[1617]: time="2025-11-04T23:53:09.018316279Z" 
level=info msg="connecting to shim 8cf3867e530eaedfe853080d77bc429f1ffdfc25beb7f1f7f5bf16d91ba41e0a" address="unix:///run/containerd/s/80d53996bb3bf78b71dc97a3f908d001ccea3212d2e4ecffb65201ef9df0bee4" protocol=ttrpc version=3 Nov 4 23:53:09.049515 systemd[1]: Started cri-containerd-8cf3867e530eaedfe853080d77bc429f1ffdfc25beb7f1f7f5bf16d91ba41e0a.scope - libcontainer container 8cf3867e530eaedfe853080d77bc429f1ffdfc25beb7f1f7f5bf16d91ba41e0a. Nov 4 23:53:09.058307 kubelet[2756]: E1104 23:53:09.058265 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:09.106623 containerd[1617]: time="2025-11-04T23:53:09.106568691Z" level=info msg="StartContainer for \"8cf3867e530eaedfe853080d77bc429f1ffdfc25beb7f1f7f5bf16d91ba41e0a\" returns successfully" Nov 4 23:53:09.185402 systemd-networkd[1518]: caliadb2293a6e8: Link UP Nov 4 23:53:09.185616 systemd-networkd[1518]: caliadb2293a6e8: Gained carrier Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.011 [INFO][4298] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.030 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vmhxx-eth0 csi-node-driver- calico-system aba4eacc-4aef-4d09-939a-0ecd4f64c80b 752 0 2025-11-04 23:52:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vmhxx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliadb2293a6e8 [] [] }} ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Namespace="calico-system" Pod="csi-node-driver-vmhxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmhxx-" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.030 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Namespace="calico-system" Pod="csi-node-driver-vmhxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmhxx-eth0" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.065 [INFO][4360] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" HandleID="k8s-pod-network.40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Workload="localhost-k8s-csi--node--driver--vmhxx-eth0" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.065 [INFO][4360] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" HandleID="k8s-pod-network.40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Workload="localhost-k8s-csi--node--driver--vmhxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c78f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vmhxx", "timestamp":"2025-11-04 23:53:09.065547804 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.065 [INFO][4360] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.066 [INFO][4360] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.066 [INFO][4360] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.079 [INFO][4360] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" host="localhost" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.142 [INFO][4360] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.150 [INFO][4360] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.153 [INFO][4360] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.157 [INFO][4360] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.157 [INFO][4360] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" host="localhost" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.160 [INFO][4360] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2 Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.165 [INFO][4360] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" host="localhost" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.174 [INFO][4360] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" host="localhost" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.174 [INFO][4360] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" host="localhost" Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.174 [INFO][4360] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
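The kubelet "Nameserver limits exceeded" errors interleaved above come from its resolv.conf handling: glibc resolvers honor at most three nameservers (MAXNS), so kubelet clamps the host's list and logs the survivors — here 1.1.1.1, 1.0.0.1, and 8.8.8.8, implying the host resolv.conf carried more than three. A sketch of the same clamp; the parsing is simplified relative to kubelet's:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet warns past this

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, omitting %d of %d\n",
			len(servers)-maxNameservers, len(servers))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```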
Nov 4 23:53:09.206696 containerd[1617]: 2025-11-04 23:53:09.174 [INFO][4360] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" HandleID="k8s-pod-network.40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Workload="localhost-k8s-csi--node--driver--vmhxx-eth0" Nov 4 23:53:09.207339 containerd[1617]: 2025-11-04 23:53:09.179 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Namespace="calico-system" Pod="csi-node-driver-vmhxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmhxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vmhxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aba4eacc-4aef-4d09-939a-0ecd4f64c80b", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vmhxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliadb2293a6e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:09.207339 containerd[1617]: 2025-11-04 23:53:09.179 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Namespace="calico-system" Pod="csi-node-driver-vmhxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmhxx-eth0" Nov 4 23:53:09.207339 containerd[1617]: 2025-11-04 23:53:09.179 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadb2293a6e8 ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Namespace="calico-system" Pod="csi-node-driver-vmhxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmhxx-eth0" Nov 4 23:53:09.207339 containerd[1617]: 2025-11-04 23:53:09.185 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Namespace="calico-system" Pod="csi-node-driver-vmhxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmhxx-eth0" Nov 4 23:53:09.207339 containerd[1617]: 2025-11-04 23:53:09.186 [INFO][4298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Namespace="calico-system" Pod="csi-node-driver-vmhxx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vmhxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vmhxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aba4eacc-4aef-4d09-939a-0ecd4f64c80b", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2", Pod:"csi-node-driver-vmhxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliadb2293a6e8", MAC:"0a:7f:c9:cc:4f:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:09.207339 containerd[1617]: 2025-11-04 23:53:09.198 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" Namespace="calico-system" Pod="csi-node-driver-vmhxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vmhxx-eth0" Nov 4 23:53:09.210024 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:36746.service - OpenSSH per-connection server daemon (10.0.0.1:36746). Nov 4 23:53:09.223781 containerd[1617]: time="2025-11-04T23:53:09.223715872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579\" id:\"45ae59ff8fa4c693431aa0f0134a8d4676776e9dbeb169df1a747bc1454c1331\" pid:4391 exit_status:1 exited_at:{seconds:1762300389 nanos:205226771}" Nov 4 23:53:09.243379 containerd[1617]: time="2025-11-04T23:53:09.243213189Z" level=info msg="connecting to shim 40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2" address="unix:///run/containerd/s/d7eb771d1a476c3e6a8ddf1f1a44fe8aa159d40a23ffdac997f50301ccd772ee" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:53:09.280225 systemd[1]: Started cri-containerd-40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2.scope - libcontainer container 40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2. 
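Each "Populated endpoint" / "Added Mac, interface name, and active container ID" pair above dumps the same projectcalico.org/v3 WorkloadEndpoint twice: first with ContainerID and MAC empty, then filled in once the veth is wired up. A trimmed-down Go model of just the Spec fields that appear in (and change between) the two dumps — field names follow the logged struct, but the real v3.WorkloadEndpointSpec in libcalico-go carries much more:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// workloadEndpointSpec keeps only the Spec fields visible in the
// dumps above; values below are taken from the csi-node-driver entry.
type workloadEndpointSpec struct {
	Orchestrator  string   `json:"orchestrator"`
	Node          string   `json:"node"`
	ContainerID   string   `json:"containerID,omitempty"` // empty in the first dump
	Pod           string   `json:"pod"`
	Endpoint      string   `json:"endpoint"`
	IPNetworks    []string `json:"ipNetworks"`
	Profiles      []string `json:"profiles"`
	InterfaceName string   `json:"interfaceName"`
	MAC           string   `json:"mac,omitempty"` // empty until the veth exists
}

func main() {
	ep := workloadEndpointSpec{
		Orchestrator:  "k8s",
		Node:          "localhost",
		ContainerID:   "40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2",
		Pod:           "csi-node-driver-vmhxx",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.88.132/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.csi-node-driver"},
		InterfaceName: "caliadb2293a6e8",
		MAC:           "0a:7f:c9:cc:4f:a1",
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(ep); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```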
Nov 4 23:53:09.297533 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:53:09.308721 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 36746 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:09.311117 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:09.311827 containerd[1617]: time="2025-11-04T23:53:09.311783288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vmhxx,Uid:aba4eacc-4aef-4d09-939a-0ecd4f64c80b,Namespace:calico-system,Attempt:0,} returns sandbox id \"40075f81e3203c09d7946e2d82aca5fac4d15e416d34e1b2236efa777f8109f2\"" Nov 4 23:53:09.318089 systemd-logind[1590]: New session 8 of user core. Nov 4 23:53:09.326192 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 23:53:09.352336 containerd[1617]: time="2025-11-04T23:53:09.352255955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:09.353798 containerd[1617]: time="2025-11-04T23:53:09.353742207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:53:09.358649 containerd[1617]: time="2025-11-04T23:53:09.358595033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:53:09.358923 kubelet[2756]: E1104 23:53:09.358849 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:53:09.359074 kubelet[2756]: E1104 23:53:09.358929 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:53:09.359996 containerd[1617]: time="2025-11-04T23:53:09.359719255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:53:09.368814 kubelet[2756]: E1104 23:53:09.368709 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btwfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f6449fc66-sdj9r_calico-apiserver(b79db394-cdf6-4f69-a1b0-fe3bb4b1119d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:09.369913 kubelet[2756]: E1104 23:53:09.369869 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r" podUID="b79db394-cdf6-4f69-a1b0-fe3bb4b1119d" Nov 4 23:53:09.481124 sshd[4485]: Connection closed by 10.0.0.1 port 36746 Nov 4 23:53:09.482281 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:09.489099 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:36746.service: Deactivated successfully. Nov 4 23:53:09.489364 systemd-logind[1590]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:53:09.492466 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:53:09.494806 systemd-logind[1590]: Removed session 8. 
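The ErrImagePull chain above is mechanical: containerd tries to resolve ghcr.io/flatcar/calico/apiserver:v3.30.4, the registry answers 404 Not Found, containerd surfaces a NotFound error, and kubelet reports ErrImagePull followed by ImagePullBackOff. A sketch of reproducing the pull and classifying that failure with the containerd Go client — import paths per containerd 1.x (errdefs moved to its own module in 2.x):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"

	_, err = client.Pull(ctx, ref, containerd.WithPullUnpack)
	switch {
	case err == nil:
		fmt.Println("pulled", ref)
	case errdefs.IsNotFound(err):
		// The case in the log: the tag does not exist upstream, so
		// kubelet reports ErrImagePull and then backs off.
		fmt.Println("not found:", err)
	default:
		fmt.Println("pull failed:", err)
	}
}
```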
Nov 4 23:53:09.528988 systemd-networkd[1518]: vxlan.calico: Link UP Nov 4 23:53:09.529002 systemd-networkd[1518]: vxlan.calico: Gained carrier Nov 4 23:53:09.680269 containerd[1617]: time="2025-11-04T23:53:09.680202574Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:09.685066 containerd[1617]: time="2025-11-04T23:53:09.685012730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:53:09.685131 containerd[1617]: time="2025-11-04T23:53:09.685066370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:53:09.685293 kubelet[2756]: E1104 23:53:09.685256 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:53:09.685348 kubelet[2756]: E1104 23:53:09.685307 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:53:09.685653 kubelet[2756]: E1104 23:53:09.685580 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b46ab3eb8c8041dba6f2cfdb3b9d0d0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5hp9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9f67fff6-f7vtj_calico-system(be662a3c-e749-45bb-a12b-0eac658e4ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:09.686120 containerd[1617]: time="2025-11-04T23:53:09.685780062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:53:09.892274 systemd-networkd[1518]: cali5a7d8da3e9d: Gained IPv6LL Nov 4 23:53:09.912587 kubelet[2756]: E1104 23:53:09.912211 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:09.915637 containerd[1617]: time="2025-11-04T23:53:09.914194618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5mbcv,Uid:db2f6229-0a76-421d-9c33-f0b44fd98a47,Namespace:kube-system,Attempt:0,}" Nov 4 23:53:10.020247 systemd-networkd[1518]: calic3e6613a109: Gained IPv6LL Nov 4 23:53:10.024922 containerd[1617]: time="2025-11-04T23:53:10.024877703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:10.026391 containerd[1617]: time="2025-11-04T23:53:10.026305525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:53:10.026674 containerd[1617]: time="2025-11-04T23:53:10.026407005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:53:10.026902 kubelet[2756]: E1104 23:53:10.026859 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:53:10.027318 kubelet[2756]: E1104 23:53:10.026920 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:53:10.027318 kubelet[2756]: E1104 23:53:10.027243 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tntpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmhxx_calico-system(aba4eacc-4aef-4d09-939a-0ecd4f64c80b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:10.028433 containerd[1617]: time="2025-11-04T23:53:10.027909838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:53:10.048865 systemd-networkd[1518]: cali68153ae91e3: Link UP Nov 4 23:53:10.049627 systemd-networkd[1518]: cali68153ae91e3: Gained carrier Nov 4 23:53:10.069696 kubelet[2756]: E1104 23:53:10.069652 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:09.968 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0 coredns-668d6bf9bc- kube-system db2f6229-0a76-421d-9c33-f0b44fd98a47 855 0 2025-11-04 23:52:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5mbcv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68153ae91e3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Namespace="kube-system" Pod="coredns-668d6bf9bc-5mbcv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5mbcv-" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:09.968 
[INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Namespace="kube-system" Pod="coredns-668d6bf9bc-5mbcv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:09.998 [INFO][4594] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" HandleID="k8s-pod-network.6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Workload="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:09.998 [INFO][4594] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" HandleID="k8s-pod-network.6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Workload="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7060), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5mbcv", "timestamp":"2025-11-04 23:53:09.99811292 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:09.998 [INFO][4594] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:09.998 [INFO][4594] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:09.998 [INFO][4594] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.005 [INFO][4594] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" host="localhost" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.010 [INFO][4594] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.014 [INFO][4594] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.016 [INFO][4594] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.019 [INFO][4594] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.019 [INFO][4594] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" host="localhost" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.021 [INFO][4594] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61 Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.029 [INFO][4594] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" host="localhost" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 
23:53:10.038 [INFO][4594] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" host="localhost" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.038 [INFO][4594] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" host="localhost" Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.038 [INFO][4594] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:53:10.070628 containerd[1617]: 2025-11-04 23:53:10.038 [INFO][4594] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" HandleID="k8s-pod-network.6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Workload="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" Nov 4 23:53:10.072233 containerd[1617]: 2025-11-04 23:53:10.045 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Namespace="kube-system" Pod="coredns-668d6bf9bc-5mbcv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"db2f6229-0a76-421d-9c33-f0b44fd98a47", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5mbcv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68153ae91e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:10.072233 containerd[1617]: 2025-11-04 23:53:10.045 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Namespace="kube-system" Pod="coredns-668d6bf9bc-5mbcv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" Nov 4 23:53:10.072233 containerd[1617]: 2025-11-04 23:53:10.045 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the 
host side veth name to cali68153ae91e3 ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Namespace="kube-system" Pod="coredns-668d6bf9bc-5mbcv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" Nov 4 23:53:10.072233 containerd[1617]: 2025-11-04 23:53:10.050 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Namespace="kube-system" Pod="coredns-668d6bf9bc-5mbcv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" Nov 4 23:53:10.072233 containerd[1617]: 2025-11-04 23:53:10.050 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Namespace="kube-system" Pod="coredns-668d6bf9bc-5mbcv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"db2f6229-0a76-421d-9c33-f0b44fd98a47", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61", Pod:"coredns-668d6bf9bc-5mbcv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68153ae91e3", MAC:"fa:ca:49:95:6f:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:10.072233 containerd[1617]: 2025-11-04 23:53:10.064 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" Namespace="kube-system" Pod="coredns-668d6bf9bc-5mbcv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5mbcv-eth0" Nov 4 23:53:10.076362 kubelet[2756]: E1104 23:53:10.076240 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r" podUID="b79db394-cdf6-4f69-a1b0-fe3bb4b1119d" Nov 4 23:53:10.087284 kubelet[2756]: I1104 23:53:10.087206 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g5nlt" podStartSLOduration=34.087179665 podStartE2EDuration="34.087179665s" podCreationTimestamp="2025-11-04 23:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:53:10.084512365 +0000 UTC m=+40.267113328" watchObservedRunningTime="2025-11-04 23:53:10.087179665 +0000 UTC m=+40.269780608" Nov 4 23:53:10.117610 containerd[1617]: time="2025-11-04T23:53:10.117547234Z" level=info msg="connecting to shim 6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61" address="unix:///run/containerd/s/80ccfc496bc98f566e421f75fc88dd318815d574d9df2b42da169dff3752f075" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:53:10.147323 systemd[1]: Started cri-containerd-6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61.scope - libcontainer container 6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61. Nov 4 23:53:10.168090 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:53:10.260146 containerd[1617]: time="2025-11-04T23:53:10.260092218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5mbcv,Uid:db2f6229-0a76-421d-9c33-f0b44fd98a47,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61\"" Nov 4 23:53:10.260961 kubelet[2756]: E1104 23:53:10.260925 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:10.262708 containerd[1617]: time="2025-11-04T23:53:10.262668987Z" level=info msg="CreateContainer within sandbox \"6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:53:10.339228 systemd-networkd[1518]: calicf19856fcba: Gained IPv6LL Nov 4 23:53:10.352298 containerd[1617]: time="2025-11-04T23:53:10.352234870Z" level=info msg="Container 3774fcc16e377fa09cb5b2facbe248c83d148669e1a5807c0ca23a7692742af2: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:53:10.357145 containerd[1617]: time="2025-11-04T23:53:10.357118903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:10.359845 containerd[1617]: time="2025-11-04T23:53:10.359797745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:53:10.360582 containerd[1617]: time="2025-11-04T23:53:10.359876603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:53:10.360582 containerd[1617]: time="2025-11-04T23:53:10.360443468Z" level=info msg="CreateContainer within sandbox \"6e30d32a8e8b701b5cc8dc433a4bde47449cc1472888662a6d43dca754069c61\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3774fcc16e377fa09cb5b2facbe248c83d148669e1a5807c0ca23a7692742af2\"" Nov 4 23:53:10.360582 containerd[1617]: time="2025-11-04T23:53:10.360496467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:53:10.360699 kubelet[2756]: E1104 23:53:10.360094 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:53:10.360699 kubelet[2756]: E1104 23:53:10.360166 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:53:10.360699 kubelet[2756]: E1104 23:53:10.360359 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hp9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9f67fff6-f7vtj_calico-system(be662a3c-e749-45bb-a12b-0eac658e4ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:10.361230 containerd[1617]: time="2025-11-04T23:53:10.361206611Z" 
level=info msg="StartContainer for \"3774fcc16e377fa09cb5b2facbe248c83d148669e1a5807c0ca23a7692742af2\"" Nov 4 23:53:10.362382 containerd[1617]: time="2025-11-04T23:53:10.362356982Z" level=info msg="connecting to shim 3774fcc16e377fa09cb5b2facbe248c83d148669e1a5807c0ca23a7692742af2" address="unix:///run/containerd/s/80ccfc496bc98f566e421f75fc88dd318815d574d9df2b42da169dff3752f075" protocol=ttrpc version=3 Nov 4 23:53:10.362434 kubelet[2756]: E1104 23:53:10.362379 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9f67fff6-f7vtj" podUID="be662a3c-e749-45bb-a12b-0eac658e4ad2" Nov 4 23:53:10.392216 systemd[1]: Started cri-containerd-3774fcc16e377fa09cb5b2facbe248c83d148669e1a5807c0ca23a7692742af2.scope - libcontainer container 3774fcc16e377fa09cb5b2facbe248c83d148669e1a5807c0ca23a7692742af2. Nov 4 23:53:10.434008 containerd[1617]: time="2025-11-04T23:53:10.433733760Z" level=info msg="StartContainer for \"3774fcc16e377fa09cb5b2facbe248c83d148669e1a5807c0ca23a7692742af2\" returns successfully" Nov 4 23:53:10.707285 containerd[1617]: time="2025-11-04T23:53:10.707136223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:10.708994 containerd[1617]: time="2025-11-04T23:53:10.708886941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:53:10.709155 containerd[1617]: time="2025-11-04T23:53:10.708948417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:53:10.709419 kubelet[2756]: E1104 23:53:10.709365 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:53:10.709501 kubelet[2756]: E1104 23:53:10.709444 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:53:10.709662 kubelet[2756]: E1104 23:53:10.709610 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tntpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmhxx_calico-system(aba4eacc-4aef-4d09-939a-0ecd4f64c80b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:10.710937 kubelet[2756]: E1104 23:53:10.710845 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b" Nov 4 23:53:10.912340 containerd[1617]: time="2025-11-04T23:53:10.912211438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ffc7886c-bvp99,Uid:61ed265f-0860-4f8f-9e00-9c62a99949f4,Namespace:calico-system,Attempt:0,}" Nov 4 23:53:10.925364 containerd[1617]: time="2025-11-04T23:53:10.925302476Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-dzl9n,Uid:59d88858-9079-42f7-b468-71dc6a4f5e97,Namespace:calico-system,Attempt:0,}" Nov 4 23:53:11.080305 kubelet[2756]: E1104 23:53:11.080161 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:11.080859 kubelet[2756]: E1104 23:53:11.080738 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:11.082713 kubelet[2756]: E1104 23:53:11.082641 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b" Nov 4 23:53:11.082872 kubelet[2756]: E1104 23:53:11.082682 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9f67fff6-f7vtj" podUID="be662a3c-e749-45bb-a12b-0eac658e4ad2" Nov 4 23:53:11.146376 systemd-networkd[1518]: cali4b6e2f27b1c: Link UP Nov 4 23:53:11.147792 systemd-networkd[1518]: cali4b6e2f27b1c: Gained carrier Nov 4 23:53:11.236250 systemd-networkd[1518]: caliadb2293a6e8: Gained IPv6LL Nov 4 23:53:11.236614 systemd-networkd[1518]: cali68153ae91e3: Gained IPv6LL Nov 4 23:53:11.287971 kubelet[2756]: I1104 23:53:11.287751 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5mbcv" podStartSLOduration=35.287730263 podStartE2EDuration="35.287730263s" podCreationTimestamp="2025-11-04 23:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:53:11.285953787 +0000 UTC m=+41.468554740" watchObservedRunningTime="2025-11-04 23:53:11.287730263 
+0000 UTC m=+41.470331217" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:10.957 [INFO][4698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0 calico-kube-controllers-68ffc7886c- calico-system 61ed265f-0860-4f8f-9e00-9c62a99949f4 853 0 2025-11-04 23:52:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68ffc7886c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-68ffc7886c-bvp99 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4b6e2f27b1c [] [] }} ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Namespace="calico-system" Pod="calico-kube-controllers-68ffc7886c-bvp99" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:10.958 [INFO][4698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Namespace="calico-system" Pod="calico-kube-controllers-68ffc7886c-bvp99" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:10.997 [INFO][4726] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" HandleID="k8s-pod-network.c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Workload="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:10.998 [INFO][4726] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" HandleID="k8s-pod-network.c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Workload="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139690), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-68ffc7886c-bvp99", "timestamp":"2025-11-04 23:53:10.997948428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:10.998 [INFO][4726] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:10.998 [INFO][4726] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
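The repeated kubelet dns.go warnings above ("Nameserver limits exceeded") come from a hard resolver limit: glibc only consults the first three "nameserver" entries in resolv.conf, so kubelet trims the list and logs what it omitted; here 1.1.1.1, 1.0.0.1, and 8.8.8.8 are the three that survive. A minimal Go sketch of that check, assuming the upstream list lives in the node's /etc/resolv.conf:

    package main

    import (
    	"bufio"
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// Collect every "nameserver <addr>" entry in file order.
    	var ns []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			ns = append(ns, fields[1])
    		}
    	}

    	const maxNameservers = 3 // the limit kubelet enforces to match glibc
    	if len(ns) > maxNameservers {
    		fmt.Printf("limit exceeded: applying %v, omitting %v\n",
    			ns[:maxNameservers], ns[maxNameservers:])
    	}
    }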
Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:10.998 [INFO][4726] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.009 [INFO][4726] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" host="localhost" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.015 [INFO][4726] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.020 [INFO][4726] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.022 [INFO][4726] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.024 [INFO][4726] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.024 [INFO][4726] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" host="localhost" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.026 [INFO][4726] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82 Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.064 [INFO][4726] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" host="localhost" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.137 [INFO][4726] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" host="localhost" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.138 [INFO][4726] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" host="localhost" Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.138 [INFO][4726] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
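For reference, the affine block appearing throughout these IPAM entries, 192.168.88.128/26, spans 2^(32-26) = 64 addresses (192.168.88.128 through 192.168.88.191). The coredns-668d6bf9bc-5mbcv request claimed .133 from it earlier, and this calico-kube-controllers request claims the next free address, .134.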
Nov 4 23:53:11.292349 containerd[1617]: 2025-11-04 23:53:11.138 [INFO][4726] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" HandleID="k8s-pod-network.c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Workload="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" Nov 4 23:53:11.293646 containerd[1617]: 2025-11-04 23:53:11.142 [INFO][4698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Namespace="calico-system" Pod="calico-kube-controllers-68ffc7886c-bvp99" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0", GenerateName:"calico-kube-controllers-68ffc7886c-", Namespace:"calico-system", SelfLink:"", UID:"61ed265f-0860-4f8f-9e00-9c62a99949f4", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ffc7886c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-68ffc7886c-bvp99", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b6e2f27b1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:11.293646 containerd[1617]: 2025-11-04 23:53:11.142 [INFO][4698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Namespace="calico-system" Pod="calico-kube-controllers-68ffc7886c-bvp99" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" Nov 4 23:53:11.293646 containerd[1617]: 2025-11-04 23:53:11.143 [INFO][4698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b6e2f27b1c ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Namespace="calico-system" Pod="calico-kube-controllers-68ffc7886c-bvp99" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" Nov 4 23:53:11.293646 containerd[1617]: 2025-11-04 23:53:11.149 [INFO][4698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Namespace="calico-system" Pod="calico-kube-controllers-68ffc7886c-bvp99" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" Nov 4 23:53:11.293646 containerd[1617]: 2025-11-04 23:53:11.150 [INFO][4698] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Namespace="calico-system" Pod="calico-kube-controllers-68ffc7886c-bvp99" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0", GenerateName:"calico-kube-controllers-68ffc7886c-", Namespace:"calico-system", SelfLink:"", UID:"61ed265f-0860-4f8f-9e00-9c62a99949f4", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ffc7886c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82", Pod:"calico-kube-controllers-68ffc7886c-bvp99", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b6e2f27b1c", MAC:"9a:9e:7b:d9:75:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:11.293646 containerd[1617]: 2025-11-04 23:53:11.286 [INFO][4698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" Namespace="calico-system" Pod="calico-kube-controllers-68ffc7886c-bvp99" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68ffc7886c--bvp99-eth0" Nov 4 23:53:11.391138 containerd[1617]: time="2025-11-04T23:53:11.391021827Z" level=info msg="connecting to shim c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82" address="unix:///run/containerd/s/d0f9132cb36c5cf6ca4bbc0490ccc4bbeda78319a010d7345afc438833580b20" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:53:11.418993 systemd-networkd[1518]: cali1063084c48c: Link UP Nov 4 23:53:11.420350 systemd-networkd[1518]: cali1063084c48c: Gained carrier Nov 4 23:53:11.426478 systemd[1]: Started cri-containerd-c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82.scope - libcontainer container c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82. 
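Both here and in the coredns flow earlier (RunPodSandbox returning a sandbox id, CreateContainer within it, then "StartContainer ... returns successfully"), the entries trace the standard CRI lifecycle that kubelet drives against containerd. A minimal sketch of the same three calls issued directly against the CRI socket; sandboxCfg and containerCfg are hypothetical placeholders, not the exact specs from this log:

    package main

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // run drives one pod through the CRI lifecycle visible in the log:
    // RunPodSandbox -> CreateContainer -> StartContainer.
    func run(ctx context.Context, sandboxCfg *runtime.PodSandboxConfig,
    	containerCfg *runtime.ContainerConfig) error {

    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		return err
    	}
    	defer conn.Close()
    	rt := runtime.NewRuntimeServiceClient(conn)

    	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		return err
    	}
    	ctr, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		Config:        containerCfg,
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		return err
    	}
    	_, err = rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: ctr.ContainerId})
    	return err
    }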
Nov 4 23:53:11.446625 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:10.983 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--dzl9n-eth0 goldmane-666569f655- calico-system 59d88858-9079-42f7-b468-71dc6a4f5e97 859 0 2025-11-04 23:52:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-dzl9n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1063084c48c [] [] }} ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Namespace="calico-system" Pod="goldmane-666569f655-dzl9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dzl9n-" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:10.984 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Namespace="calico-system" Pod="goldmane-666569f655-dzl9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dzl9n-eth0" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.029 [INFO][4736] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" HandleID="k8s-pod-network.627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Workload="localhost-k8s-goldmane--666569f655--dzl9n-eth0" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.029 [INFO][4736] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" HandleID="k8s-pod-network.627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Workload="localhost-k8s-goldmane--666569f655--dzl9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004940d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-dzl9n", "timestamp":"2025-11-04 23:53:11.029336729 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.029 [INFO][4736] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.138 [INFO][4736] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.139 [INFO][4736] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.286 [INFO][4736] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" host="localhost" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.351 [INFO][4736] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.360 [INFO][4736] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.363 [INFO][4736] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.366 [INFO][4736] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.366 [INFO][4736] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" host="localhost" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.368 [INFO][4736] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.379 [INFO][4736] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" host="localhost" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.394 [INFO][4736] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" host="localhost" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.394 [INFO][4736] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" host="localhost" Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.394 [INFO][4736] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
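One detail worth noting in these timestamps: the host-wide IPAM lock really does serialize assignments. The goldmane request (4736) logged "About to acquire host-wide IPAM lock" at 23:53:11.029 but "Acquired" only at 23:53:11.138, i.e. it waited 11.138 - 11.029 ≈ 0.109 s for the calico-kube-controllers request (4726), which held the lock from 23:53:10.998 until its "Released" entry at 23:53:11.138.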
Nov 4 23:53:11.447257 containerd[1617]: 2025-11-04 23:53:11.394 [INFO][4736] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" HandleID="k8s-pod-network.627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Workload="localhost-k8s-goldmane--666569f655--dzl9n-eth0" Nov 4 23:53:11.448087 containerd[1617]: 2025-11-04 23:53:11.406 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Namespace="calico-system" Pod="goldmane-666569f655-dzl9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dzl9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dzl9n-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"59d88858-9079-42f7-b468-71dc6a4f5e97", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-dzl9n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1063084c48c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:11.448087 containerd[1617]: 2025-11-04 23:53:11.409 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Namespace="calico-system" Pod="goldmane-666569f655-dzl9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dzl9n-eth0" Nov 4 23:53:11.448087 containerd[1617]: 2025-11-04 23:53:11.409 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1063084c48c ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Namespace="calico-system" Pod="goldmane-666569f655-dzl9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dzl9n-eth0" Nov 4 23:53:11.448087 containerd[1617]: 2025-11-04 23:53:11.424 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Namespace="calico-system" Pod="goldmane-666569f655-dzl9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dzl9n-eth0" Nov 4 23:53:11.448087 containerd[1617]: 2025-11-04 23:53:11.427 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Namespace="calico-system" Pod="goldmane-666569f655-dzl9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dzl9n-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dzl9n-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"59d88858-9079-42f7-b468-71dc6a4f5e97", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e", Pod:"goldmane-666569f655-dzl9n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1063084c48c", MAC:"7a:20:8c:3e:68:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:53:11.448087 containerd[1617]: 2025-11-04 23:53:11.442 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" Namespace="calico-system" Pod="goldmane-666569f655-dzl9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dzl9n-eth0" Nov 4 23:53:11.475535 containerd[1617]: time="2025-11-04T23:53:11.475445355Z" level=info msg="connecting to shim 627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e" address="unix:///run/containerd/s/3cb43bd81b8d554f508c84bda2e0d3eff2c55268f523c2adef80368f95d4aef1" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:53:11.488011 containerd[1617]: time="2025-11-04T23:53:11.487944911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ffc7886c-bvp99,Uid:61ed265f-0860-4f8f-9e00-9c62a99949f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6f6990141ce026d5723b0f1ba62473aa949704c256491bb0f837382dc92aa82\"" Nov 4 23:53:11.490135 containerd[1617]: time="2025-11-04T23:53:11.490105409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:53:11.507298 systemd[1]: Started cri-containerd-627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e.scope - libcontainer container 627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e. 
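Each sandbox gets its own shim socket under /run/containerd/s/ (d0f913... for calico-kube-controllers, 3cb43b... for goldmane above), and containers reuse their sandbox's socket: the coredns container 3774fc... earlier connected to the same unix address as its sandbox 6e30d3.... A quick connectivity check from the node, with the socket path copied verbatim from the goldmane entry; real shim calls additionally go over ttrpc with the task-service stubs, which this sketch omits:

    package main

    import (
    	"log"
    	"net"
    )

    func main() {
    	// Path taken verbatim from the goldmane sandbox entry above.
    	conn, err := net.Dial("unix",
    		"/run/containerd/s/3cb43bd81b8d554f508c84bda2e0d3eff2c55268f523c2adef80368f95d4aef1")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	log.Println("shim socket accepts connections")
    }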
Nov 4 23:53:11.528710 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:53:11.557017 systemd-networkd[1518]: vxlan.calico: Gained IPv6LL Nov 4 23:53:11.583557 containerd[1617]: time="2025-11-04T23:53:11.583449340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dzl9n,Uid:59d88858-9079-42f7-b468-71dc6a4f5e97,Namespace:calico-system,Attempt:0,} returns sandbox id \"627f870c7839ddbecce2fb78ee89560fed95053d1440fdf956ba62f1c43a6a9e\"" Nov 4 23:53:11.848243 containerd[1617]: time="2025-11-04T23:53:11.848086341Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:11.849468 containerd[1617]: time="2025-11-04T23:53:11.849392534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:53:11.849468 containerd[1617]: time="2025-11-04T23:53:11.849446986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:53:11.849722 kubelet[2756]: E1104 23:53:11.849677 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:53:11.849774 kubelet[2756]: E1104 23:53:11.849735 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:53:11.850126 containerd[1617]: time="2025-11-04T23:53:11.850096476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:53:11.850182 kubelet[2756]: E1104 23:53:11.850075 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rj4jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68ffc7886c-bvp99_calico-system(61ed265f-0860-4f8f-9e00-9c62a99949f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:11.851342 kubelet[2756]: E1104 23:53:11.851299 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99" podUID="61ed265f-0860-4f8f-9e00-9c62a99949f4" Nov 4 23:53:11.910614 containerd[1617]: time="2025-11-04T23:53:11.910563730Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-6f6449fc66-4tmvj,Uid:403117e1-6656-4ef1-bd00-648990dd9320,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:53:12.014224 systemd-networkd[1518]: calia6157bb678a: Link UP Nov 4 23:53:12.014969 systemd-networkd[1518]: calia6157bb678a: Gained carrier Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.949 [INFO][4855] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0 calico-apiserver-6f6449fc66- calico-apiserver 403117e1-6656-4ef1-bd00-648990dd9320 858 0 2025-11-04 23:52:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f6449fc66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f6449fc66-4tmvj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia6157bb678a [] [] }} ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-4tmvj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-" Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.949 [INFO][4855] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-4tmvj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0" Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.977 [INFO][4871] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" HandleID="k8s-pod-network.25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Workload="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0" Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.977 [INFO][4871] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" HandleID="k8s-pod-network.25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Workload="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f6449fc66-4tmvj", "timestamp":"2025-11-04 23:53:11.977493573 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.977 [INFO][4871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.977 [INFO][4871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
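All of the ErrImagePull / ImagePullBackOff entries in this log share one root cause: the ghcr.io/flatcar/calico/* tags at v3.30.4 resolve to 404 Not Found on ghcr.io. A minimal sketch that reproduces the failing resolution outside kubelet with the classic containerd Go client, assuming access to the node's containerd socket; the image ref is copied verbatim from the kube-controllers failure above:

    package main

    import (
    	"context"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// kubelet's pulls land in the "k8s.io" namespace, as the
    	// "connecting to shim ... namespace=k8s.io" entries show.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.4",
    		containerd.WithPullUnpack)
    	if err != nil {
    		// Expected, per the log: failed to resolve reference ... not found.
    		log.Fatalf("pull failed: %v", err)
    	}
    }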
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.977 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.984 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" host="localhost"
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.988 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.992 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.994 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.996 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.996 [INFO][4871] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" host="localhost"
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:11.997 [INFO][4871] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:12.001 [INFO][4871] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" host="localhost"
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:12.007 [INFO][4871] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" host="localhost"
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:12.007 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" host="localhost"
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:12.007 [INFO][4871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 4 23:53:12.027407 containerd[1617]: 2025-11-04 23:53:12.008 [INFO][4871] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" HandleID="k8s-pod-network.25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Workload="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0"
Nov 4 23:53:12.027960 containerd[1617]: 2025-11-04 23:53:12.011 [INFO][4855] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-4tmvj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0", GenerateName:"calico-apiserver-6f6449fc66-", Namespace:"calico-apiserver", SelfLink:"", UID:"403117e1-6656-4ef1-bd00-648990dd9320", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6449fc66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f6449fc66-4tmvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6157bb678a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 4 23:53:12.027960 containerd[1617]: 2025-11-04 23:53:12.011 [INFO][4855] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-4tmvj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0"
Nov 4 23:53:12.027960 containerd[1617]: 2025-11-04 23:53:12.011 [INFO][4855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6157bb678a ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-4tmvj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0"
Nov 4 23:53:12.027960 containerd[1617]: 2025-11-04 23:53:12.014 [INFO][4855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-4tmvj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0"
Nov 4 23:53:12.027960 containerd[1617]: 2025-11-04 23:53:12.015 [INFO][4855] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-4tmvj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0", GenerateName:"calico-apiserver-6f6449fc66-", Namespace:"calico-apiserver", SelfLink:"", UID:"403117e1-6656-4ef1-bd00-648990dd9320", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 52, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6449fc66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a", Pod:"calico-apiserver-6f6449fc66-4tmvj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6157bb678a", MAC:"5e:53:9e:e0:7c:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 4 23:53:12.027960 containerd[1617]: 2025-11-04 23:53:12.023 [INFO][4855] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" Namespace="calico-apiserver" Pod="calico-apiserver-6f6449fc66-4tmvj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6449fc66--4tmvj-eth0"
Nov 4 23:53:12.070271 containerd[1617]: time="2025-11-04T23:53:12.070209764Z" level=info msg="connecting to shim 25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a" address="unix:///run/containerd/s/44ac8b485cf06b2538d5b25cc8471cbbc06dedbc5564583c40a6da9ee1d52240" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:53:12.100550 kubelet[2756]: E1104 23:53:12.100406 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:12.101709 kubelet[2756]: E1104 23:53:12.101390 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:12.108507 kubelet[2756]: E1104 23:53:12.108441 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99" podUID="61ed265f-0860-4f8f-9e00-9c62a99949f4"
Nov 4 23:53:12.127333 systemd[1]: Started cri-containerd-25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a.scope - libcontainer container 25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a.
Nov 4 23:53:12.146991 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 4 23:53:12.181486 containerd[1617]: time="2025-11-04T23:53:12.180826807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:12.183135 containerd[1617]: time="2025-11-04T23:53:12.183099244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 4 23:53:12.183277 containerd[1617]: time="2025-11-04T23:53:12.183147675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:53:12.183535 kubelet[2756]: E1104 23:53:12.183479 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:53:12.183586 kubelet[2756]: E1104 23:53:12.183543 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:53:12.183749 kubelet[2756]: E1104 23:53:12.183703 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5kmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dzl9n_calico-system(59d88858-9079-42f7-b468-71dc6a4f5e97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:12.184982 kubelet[2756]: E1104 23:53:12.184920 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dzl9n" podUID="59d88858-9079-42f7-b468-71dc6a4f5e97"
Nov 4 23:53:12.211809 containerd[1617]: time="2025-11-04T23:53:12.211398468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6449fc66-4tmvj,Uid:403117e1-6656-4ef1-bd00-648990dd9320,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"25c0409f2b7a3ece9d09b8a10b30aefc2d82099ead9fe5ffd24fc54a1bdab02a\""
Nov 4 23:53:12.218243 containerd[1617]: time="2025-11-04T23:53:12.218133908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 23:53:12.323254 systemd-networkd[1518]: cali4b6e2f27b1c: Gained IPv6LL
Nov 4 23:53:12.556876 containerd[1617]: time="2025-11-04T23:53:12.556806988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:12.558072 containerd[1617]: time="2025-11-04T23:53:12.558020236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 23:53:12.558133 containerd[1617]: time="2025-11-04T23:53:12.558093184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:53:12.558399 kubelet[2756]: E1104 23:53:12.558337 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:53:12.558494 kubelet[2756]: E1104 23:53:12.558410 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:53:12.558595 kubelet[2756]: E1104 23:53:12.558557 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qf728,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f6449fc66-4tmvj_calico-apiserver(403117e1-6656-4ef1-bd00-648990dd9320): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:12.559787 kubelet[2756]: E1104 23:53:12.559715 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj" podUID="403117e1-6656-4ef1-bd00-648990dd9320"
Nov 4 23:53:13.104294 kubelet[2756]: E1104 23:53:13.104226 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:13.104294 kubelet[2756]: E1104 23:53:13.104256 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj" podUID="403117e1-6656-4ef1-bd00-648990dd9320"
Nov 4 23:53:13.105367 kubelet[2756]: E1104 23:53:13.104328 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99" podUID="61ed265f-0860-4f8f-9e00-9c62a99949f4"
Nov 4 23:53:13.105367 kubelet[2756]: E1104 23:53:13.104377 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dzl9n" podUID="59d88858-9079-42f7-b468-71dc6a4f5e97"
Nov 4 23:53:13.347311 systemd-networkd[1518]: cali1063084c48c: Gained IPv6LL
Nov 4 23:53:13.540360 systemd-networkd[1518]: calia6157bb678a: Gained IPv6LL
Nov 4 23:53:14.106269 kubelet[2756]: E1104 23:53:14.106193 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj" podUID="403117e1-6656-4ef1-bd00-648990dd9320"
Nov 4 23:53:14.498097 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:36748.service - OpenSSH per-connection server daemon (10.0.0.1:36748).
Nov 4 23:53:14.602562 sshd[4941]: Accepted publickey for core from 10.0.0.1 port 36748 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4
Nov 4 23:53:14.604664 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:14.609886 systemd-logind[1590]: New session 9 of user core.
Nov 4 23:53:14.617354 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 4 23:53:14.753521 sshd[4946]: Connection closed by 10.0.0.1 port 36748
Nov 4 23:53:14.753824 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:14.759363 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:36748.service: Deactivated successfully.
Nov 4 23:53:14.762239 systemd[1]: session-9.scope: Deactivated successfully.
Nov 4 23:53:14.763167 systemd-logind[1590]: Session 9 logged out. Waiting for processes to exit.
Nov 4 23:53:14.764746 systemd-logind[1590]: Removed session 9.
Nov 4 23:53:19.767837 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:41438.service - OpenSSH per-connection server daemon (10.0.0.1:41438).
Nov 4 23:53:19.827480 sshd[4968]: Accepted publickey for core from 10.0.0.1 port 41438 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4
Nov 4 23:53:19.829295 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:19.834294 systemd-logind[1590]: New session 10 of user core.
Nov 4 23:53:19.843182 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 4 23:53:19.971707 sshd[4971]: Connection closed by 10.0.0.1 port 41438
Nov 4 23:53:19.972198 sshd-session[4968]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:19.983894 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:41438.service: Deactivated successfully.
Nov 4 23:53:19.986517 systemd[1]: session-10.scope: Deactivated successfully.
Nov 4 23:53:19.987512 systemd-logind[1590]: Session 10 logged out. Waiting for processes to exit.
Nov 4 23:53:19.992778 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:41454.service - OpenSSH per-connection server daemon (10.0.0.1:41454).
Nov 4 23:53:19.995598 systemd-logind[1590]: Removed session 10.
Nov 4 23:53:20.056140 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 41454 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4
Nov 4 23:53:20.058221 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:20.063993 systemd-logind[1590]: New session 11 of user core.
Nov 4 23:53:20.071224 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 4 23:53:20.221775 sshd[4994]: Connection closed by 10.0.0.1 port 41454
Nov 4 23:53:20.223519 sshd-session[4991]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:20.237740 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:41454.service: Deactivated successfully.
Nov 4 23:53:20.242378 systemd[1]: session-11.scope: Deactivated successfully.
Nov 4 23:53:20.245308 systemd-logind[1590]: Session 11 logged out. Waiting for processes to exit.
Nov 4 23:53:20.250411 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:41466.service - OpenSSH per-connection server daemon (10.0.0.1:41466).
Nov 4 23:53:20.251310 systemd-logind[1590]: Removed session 11.
Nov 4 23:53:20.314986 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 41466 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4
Nov 4 23:53:20.316909 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:20.322291 systemd-logind[1590]: New session 12 of user core.
Nov 4 23:53:20.330221 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 4 23:53:20.466548 sshd[5009]: Connection closed by 10.0.0.1 port 41466
Nov 4 23:53:20.467094 sshd-session[5006]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:20.472187 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:41466.service: Deactivated successfully.
Nov 4 23:53:20.474348 systemd[1]: session-12.scope: Deactivated successfully.
Nov 4 23:53:20.475368 systemd-logind[1590]: Session 12 logged out. Waiting for processes to exit.
Nov 4 23:53:20.476726 systemd-logind[1590]: Removed session 12.
Nov 4 23:53:22.911592 containerd[1617]: time="2025-11-04T23:53:22.911520812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 23:53:23.306739 containerd[1617]: time="2025-11-04T23:53:23.306559501Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:23.348296 containerd[1617]: time="2025-11-04T23:53:23.348203312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 23:53:23.348296 containerd[1617]: time="2025-11-04T23:53:23.348259257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:53:23.348572 kubelet[2756]: E1104 23:53:23.348512 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:53:23.349011 kubelet[2756]: E1104 23:53:23.348589 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:53:23.349011 kubelet[2756]: E1104 23:53:23.348831 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btwfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f6449fc66-sdj9r_calico-apiserver(b79db394-cdf6-4f69-a1b0-fe3bb4b1119d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:23.350163 kubelet[2756]: E1104 23:53:23.350075 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r" podUID="b79db394-cdf6-4f69-a1b0-fe3bb4b1119d"
Nov 4 23:53:23.911875 containerd[1617]: time="2025-11-04T23:53:23.911756665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 4 23:53:24.251967 containerd[1617]: time="2025-11-04T23:53:24.251778811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:24.253427 containerd[1617]: time="2025-11-04T23:53:24.253382250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 4 23:53:24.253500 containerd[1617]: time="2025-11-04T23:53:24.253475865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 4 23:53:24.253732 kubelet[2756]: E1104 23:53:24.253657 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 4 23:53:24.253819 kubelet[2756]: E1104 23:53:24.253735 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 4 23:53:24.254349 kubelet[2756]: E1104 23:53:24.254008 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tntpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmhxx_calico-system(aba4eacc-4aef-4d09-939a-0ecd4f64c80b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:24.254486 containerd[1617]: time="2025-11-04T23:53:24.254125945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 4 23:53:24.592447 containerd[1617]: time="2025-11-04T23:53:24.592299796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:24.593587 containerd[1617]: time="2025-11-04T23:53:24.593532449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 4 23:53:24.593642 containerd[1617]: time="2025-11-04T23:53:24.593613782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 4 23:53:24.593813 kubelet[2756]: E1104 23:53:24.593765 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 4 23:53:24.594239 kubelet[2756]: E1104 23:53:24.593824 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 4 23:53:24.594239 kubelet[2756]: E1104 23:53:24.594077 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b46ab3eb8c8041dba6f2cfdb3b9d0d0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5hp9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9f67fff6-f7vtj_calico-system(be662a3c-e749-45bb-a12b-0eac658e4ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:24.594402 containerd[1617]: time="2025-11-04T23:53:24.594230529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 4 23:53:24.915738 containerd[1617]: time="2025-11-04T23:53:24.915694233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:24.916940 containerd[1617]: time="2025-11-04T23:53:24.916882432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 4 23:53:24.916940 containerd[1617]: time="2025-11-04T23:53:24.916912969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 4 23:53:24.917164 kubelet[2756]: E1104 23:53:24.917121 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 23:53:24.917228 kubelet[2756]: E1104 23:53:24.917176 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 23:53:24.917449 kubelet[2756]: E1104 23:53:24.917390 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tntpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmhxx_calico-system(aba4eacc-4aef-4d09-939a-0ecd4f64c80b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:24.917557 containerd[1617]: time="2025-11-04T23:53:24.917544745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 4 23:53:24.918943 kubelet[2756]: E1104 23:53:24.918894 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b"
Nov 4 23:53:25.256687 containerd[1617]: time="2025-11-04T23:53:25.256492519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:25.257981 containerd[1617]: time="2025-11-04T23:53:25.257915038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 4 23:53:25.258058 containerd[1617]: time="2025-11-04T23:53:25.258010427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 4 23:53:25.258345 kubelet[2756]: E1104 23:53:25.258266 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 4 23:53:25.258435 kubelet[2756]: E1104 23:53:25.258357 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 4 23:53:25.258590 kubelet[2756]: E1104 23:53:25.258526 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hp9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9f67fff6-f7vtj_calico-system(be662a3c-e749-45bb-a12b-0eac658e4ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:25.259846 kubelet[2756]: E1104 23:53:25.259745 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9f67fff6-f7vtj" podUID="be662a3c-e749-45bb-a12b-0eac658e4ad2"
Nov 4 23:53:25.489581 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:41482.service - OpenSSH per-connection server daemon (10.0.0.1:41482).
Nov 4 23:53:25.560025 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 41482 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4
Nov 4 23:53:25.561529 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:25.566107 systemd-logind[1590]: New session 13 of user core.
Nov 4 23:53:25.578182 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 4 23:53:25.700011 sshd[5027]: Connection closed by 10.0.0.1 port 41482
Nov 4 23:53:25.700380 sshd-session[5024]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:25.705838 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:41482.service: Deactivated successfully.
Nov 4 23:53:25.708776 systemd[1]: session-13.scope: Deactivated successfully.
Nov 4 23:53:25.709910 systemd-logind[1590]: Session 13 logged out. Waiting for processes to exit.
Nov 4 23:53:25.712075 systemd-logind[1590]: Removed session 13.
Nov 4 23:53:26.911114 containerd[1617]: time="2025-11-04T23:53:26.910999861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 23:53:27.299553 containerd[1617]: time="2025-11-04T23:53:27.299391733Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:27.386308 containerd[1617]: time="2025-11-04T23:53:27.386219682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:53:27.386308 containerd[1617]: time="2025-11-04T23:53:27.386250359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 23:53:27.390940 kubelet[2756]: E1104 23:53:27.390873 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:53:27.391410 kubelet[2756]: E1104 23:53:27.390949 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:53:27.391410 kubelet[2756]: E1104 23:53:27.391208 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qf728,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f6449fc66-4tmvj_calico-apiserver(403117e1-6656-4ef1-bd00-648990dd9320): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:27.391698 containerd[1617]: time="2025-11-04T23:53:27.391662474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 4 23:53:27.393165 kubelet[2756]: E1104 23:53:27.393115 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj" podUID="403117e1-6656-4ef1-bd00-648990dd9320"
Nov 4 23:53:27.795439 containerd[1617]: time="2025-11-04T23:53:27.795361892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:27.796702 containerd[1617]: time="2025-11-04T23:53:27.796653545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 4 23:53:27.796786 containerd[1617]: time="2025-11-04T23:53:27.796695544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 4 23:53:27.796986 kubelet[2756]: E1104 23:53:27.796918 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 4 23:53:27.797076 kubelet[2756]: E1104 23:53:27.796997 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 4 23:53:27.797256 kubelet[2756]: E1104 23:53:27.797188 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rj4jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68ffc7886c-bvp99_calico-system(61ed265f-0860-4f8f-9e00-9c62a99949f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:27.798441 kubelet[2756]: E1104 23:53:27.798401 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99" podUID="61ed265f-0860-4f8f-9e00-9c62a99949f4"
Nov 4 23:53:27.913064 containerd[1617]: time="2025-11-04T23:53:27.912386670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 4 23:53:28.276150 containerd[1617]: time="2025-11-04T23:53:28.275980329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:28.277434 containerd[1617]: time="2025-11-04T23:53:28.277399501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 4 23:53:28.277497 containerd[1617]: time="2025-11-04T23:53:28.277439817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:53:28.277721 kubelet[2756]: E1104 23:53:28.277668 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:53:28.277796 kubelet[2756]: E1104 23:53:28.277732 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:53:28.277950 kubelet[2756]: E1104 23:53:28.277882
2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5kmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dzl9n_calico-system(59d88858-9079-42f7-b468-71dc6a4f5e97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:28.279107 kubelet[2756]: E1104 23:53:28.279057 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dzl9n" 
podUID="59d88858-9079-42f7-b468-71dc6a4f5e97" Nov 4 23:53:30.718681 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:49912.service - OpenSSH per-connection server daemon (10.0.0.1:49912). Nov 4 23:53:30.784102 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 49912 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:30.785794 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:30.790309 systemd-logind[1590]: New session 14 of user core. Nov 4 23:53:30.800173 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:53:30.922300 sshd[5056]: Connection closed by 10.0.0.1 port 49912 Nov 4 23:53:30.922627 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:30.927006 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:49912.service: Deactivated successfully. Nov 4 23:53:30.929403 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:53:30.930561 systemd-logind[1590]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:53:30.931981 systemd-logind[1590]: Removed session 14. Nov 4 23:53:34.910713 kubelet[2756]: E1104 23:53:34.910649 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r" podUID="b79db394-cdf6-4f69-a1b0-fe3bb4b1119d" Nov 4 23:53:35.911832 kubelet[2756]: E1104 23:53:35.911762 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b" Nov 4 23:53:35.937419 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:49918.service - OpenSSH per-connection server daemon (10.0.0.1:49918). Nov 4 23:53:35.992862 sshd[5071]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:35.994458 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:35.998716 systemd-logind[1590]: New session 15 of user core. Nov 4 23:53:36.009157 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 4 23:53:36.120799 sshd[5074]: Connection closed by 10.0.0.1 port 49918 Nov 4 23:53:36.121164 sshd-session[5071]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:36.127986 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:49918.service: Deactivated successfully. Nov 4 23:53:36.130214 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:53:36.131133 systemd-logind[1590]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:53:36.132415 systemd-logind[1590]: Removed session 15. Nov 4 23:53:38.910870 kubelet[2756]: E1104 23:53:38.910802 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99" podUID="61ed265f-0860-4f8f-9e00-9c62a99949f4" Nov 4 23:53:38.911692 kubelet[2756]: E1104 23:53:38.911637 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9f67fff6-f7vtj" podUID="be662a3c-e749-45bb-a12b-0eac658e4ad2" Nov 4 23:53:39.220697 containerd[1617]: time="2025-11-04T23:53:39.220523934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45ed525b59a068bf4a0a9a6d0d81a1d92f977a9175e483371ec92eec7d63c579\" id:\"d86acb98d6a6f0efbe9473b89eaa7155658b6081abe764abbac9f3e9b6152518\" pid:5103 exited_at:{seconds:1762300419 nanos:219728880}" Nov 4 23:53:39.222323 kubelet[2756]: E1104 23:53:39.222290 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:39.911417 kubelet[2756]: E1104 23:53:39.911010 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dzl9n" podUID="59d88858-9079-42f7-b468-71dc6a4f5e97" Nov 4 23:53:40.910373 kubelet[2756]: E1104 23:53:40.910302 2756 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj" podUID="403117e1-6656-4ef1-bd00-648990dd9320" Nov 4 23:53:41.131709 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:51438.service - OpenSSH per-connection server daemon (10.0.0.1:51438). Nov 4 23:53:41.191852 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 51438 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:41.193591 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:41.198308 systemd-logind[1590]: New session 16 of user core. Nov 4 23:53:41.209160 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 23:53:41.329383 sshd[5121]: Connection closed by 10.0.0.1 port 51438 Nov 4 23:53:41.329813 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:41.342309 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:51438.service: Deactivated successfully. Nov 4 23:53:41.344370 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:53:41.345175 systemd-logind[1590]: Session 16 logged out. Waiting for processes to exit. Nov 4 23:53:41.348110 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:51450.service - OpenSSH per-connection server daemon (10.0.0.1:51450). Nov 4 23:53:41.348800 systemd-logind[1590]: Removed session 16. Nov 4 23:53:41.412876 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 51450 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:41.414254 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:41.419027 systemd-logind[1590]: New session 17 of user core. Nov 4 23:53:41.430226 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 23:53:41.748548 sshd[5139]: Connection closed by 10.0.0.1 port 51450 Nov 4 23:53:41.749178 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:41.760794 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:51450.service: Deactivated successfully. Nov 4 23:53:41.762791 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:53:41.763561 systemd-logind[1590]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:53:41.766526 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:51462.service - OpenSSH per-connection server daemon (10.0.0.1:51462). Nov 4 23:53:41.767251 systemd-logind[1590]: Removed session 17. Nov 4 23:53:41.829998 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 51462 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:41.831473 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:41.836051 systemd-logind[1590]: New session 18 of user core. Nov 4 23:53:41.848309 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 4 23:53:41.912066 kubelet[2756]: E1104 23:53:41.911475 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:42.645385 sshd[5156]: Connection closed by 10.0.0.1 port 51462 Nov 4 23:53:42.647269 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:42.655473 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:51462.service: Deactivated successfully. Nov 4 23:53:42.657988 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:53:42.660591 systemd-logind[1590]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:53:42.664001 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:51466.service - OpenSSH per-connection server daemon (10.0.0.1:51466). Nov 4 23:53:42.665654 systemd-logind[1590]: Removed session 18. Nov 4 23:53:42.720590 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 51466 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:42.722385 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:42.727292 systemd-logind[1590]: New session 19 of user core. Nov 4 23:53:42.731370 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:53:42.992339 sshd[5181]: Connection closed by 10.0.0.1 port 51466 Nov 4 23:53:42.991742 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:43.003830 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:51466.service: Deactivated successfully. Nov 4 23:53:43.006160 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:53:43.006927 systemd-logind[1590]: Session 19 logged out. Waiting for processes to exit. Nov 4 23:53:43.009934 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:51470.service - OpenSSH per-connection server daemon (10.0.0.1:51470). Nov 4 23:53:43.010687 systemd-logind[1590]: Removed session 19. Nov 4 23:53:43.061358 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 51470 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:43.062830 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:43.067903 systemd-logind[1590]: New session 20 of user core. Nov 4 23:53:43.073174 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 23:53:43.186944 sshd[5196]: Connection closed by 10.0.0.1 port 51470 Nov 4 23:53:43.187341 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:43.192430 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:51470.service: Deactivated successfully. Nov 4 23:53:43.195203 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 23:53:43.197397 systemd-logind[1590]: Session 20 logged out. Waiting for processes to exit. Nov 4 23:53:43.198840 systemd-logind[1590]: Removed session 20. 
Nov 4 23:53:44.910539 kubelet[2756]: E1104 23:53:44.910495 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:44.910539 kubelet[2756]: E1104 23:53:44.910496 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:53:45.912099 containerd[1617]: time="2025-11-04T23:53:45.911738058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:53:46.279594 containerd[1617]: time="2025-11-04T23:53:46.279430472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:46.317595 containerd[1617]: time="2025-11-04T23:53:46.317505394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:53:46.317824 containerd[1617]: time="2025-11-04T23:53:46.317540160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:53:46.317899 kubelet[2756]: E1104 23:53:46.317845 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:53:46.318354 kubelet[2756]: E1104 23:53:46.317916 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:53:46.318354 kubelet[2756]: E1104 23:53:46.318116 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btwfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f6449fc66-sdj9r_calico-apiserver(b79db394-cdf6-4f69-a1b0-fe3bb4b1119d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:46.319503 kubelet[2756]: E1104 23:53:46.319283 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r" podUID="b79db394-cdf6-4f69-a1b0-fe3bb4b1119d" Nov 4 23:53:48.204511 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:35502.service - OpenSSH per-connection server daemon (10.0.0.1:35502). Nov 4 23:53:48.254987 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 35502 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:48.256808 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:48.261881 systemd-logind[1590]: New session 21 of user core. Nov 4 23:53:48.282180 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 23:53:48.409165 sshd[5212]: Connection closed by 10.0.0.1 port 35502 Nov 4 23:53:48.409521 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:48.414721 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:35502.service: Deactivated successfully. Nov 4 23:53:48.417127 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 23:53:48.417880 systemd-logind[1590]: Session 21 logged out. Waiting for processes to exit. Nov 4 23:53:48.419697 systemd-logind[1590]: Removed session 21. 
Nov 4 23:53:49.916899 containerd[1617]: time="2025-11-04T23:53:49.916829578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:53:50.330301 containerd[1617]: time="2025-11-04T23:53:50.330163457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:50.375697 containerd[1617]: time="2025-11-04T23:53:50.375647531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:53:50.375808 containerd[1617]: time="2025-11-04T23:53:50.375705882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:53:50.375991 kubelet[2756]: E1104 23:53:50.375935 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:53:50.376445 kubelet[2756]: E1104 23:53:50.375991 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:53:50.376445 kubelet[2756]: E1104 23:53:50.376174 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tntpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod csi-node-driver-vmhxx_calico-system(aba4eacc-4aef-4d09-939a-0ecd4f64c80b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:50.378392 containerd[1617]: time="2025-11-04T23:53:50.377931864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:53:50.766581 containerd[1617]: time="2025-11-04T23:53:50.766521212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:50.799517 containerd[1617]: time="2025-11-04T23:53:50.799485762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:53:50.808654 containerd[1617]: time="2025-11-04T23:53:50.808611737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:53:50.808903 kubelet[2756]: E1104 23:53:50.808843 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:53:50.808981 kubelet[2756]: E1104 23:53:50.808906 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:53:50.809083 kubelet[2756]: E1104 23:53:50.809023 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tntpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vmhxx_calico-system(aba4eacc-4aef-4d09-939a-0ecd4f64c80b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:50.811187 kubelet[2756]: E1104 23:53:50.811157 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b" Nov 4 23:53:51.911684 containerd[1617]: time="2025-11-04T23:53:51.911570811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:53:52.232596 containerd[1617]: time="2025-11-04T23:53:52.232441432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:52.233573 containerd[1617]: time="2025-11-04T23:53:52.233532580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:53:52.233658 containerd[1617]: time="2025-11-04T23:53:52.233608044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:53:52.233837 kubelet[2756]: E1104 23:53:52.233762 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:53:52.234257 kubelet[2756]: E1104 23:53:52.233839 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:53:52.234257 kubelet[2756]: E1104 23:53:52.233958 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b46ab3eb8c8041dba6f2cfdb3b9d0d0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5hp9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9f67fff6-f7vtj_calico-system(be662a3c-e749-45bb-a12b-0eac658e4ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:52.236058 containerd[1617]: time="2025-11-04T23:53:52.235924185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:53:52.567089 containerd[1617]: time="2025-11-04T23:53:52.566900583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:52.568158 containerd[1617]: time="2025-11-04T23:53:52.568118553Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:53:52.568247 containerd[1617]: time="2025-11-04T23:53:52.568207663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:53:52.568439 kubelet[2756]: E1104 23:53:52.568380 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:53:52.568519 kubelet[2756]: E1104 23:53:52.568458 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:53:52.568657 kubelet[2756]: E1104 23:53:52.568615 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hp9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9f67fff6-f7vtj_calico-system(be662a3c-e749-45bb-a12b-0eac658e4ad2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:52.569889 kubelet[2756]: E1104 23:53:52.569828 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9f67fff6-f7vtj" podUID="be662a3c-e749-45bb-a12b-0eac658e4ad2" Nov 4 23:53:52.910981 containerd[1617]: time="2025-11-04T23:53:52.910925583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:53:53.354321 containerd[1617]: time="2025-11-04T23:53:53.354147065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:53.422334 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:35518.service - OpenSSH per-connection server daemon (10.0.0.1:35518). Nov 4 23:53:53.439414 containerd[1617]: time="2025-11-04T23:53:53.439347742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:53:53.439574 containerd[1617]: time="2025-11-04T23:53:53.439409039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:53:53.439703 kubelet[2756]: E1104 23:53:53.439654 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:53:53.440094 kubelet[2756]: E1104 23:53:53.439717 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:53:53.440094 kubelet[2756]: E1104 23:53:53.440003 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qf728,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f6449fc66-4tmvj_calico-apiserver(403117e1-6656-4ef1-bd00-648990dd9320): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:53:53.440242 containerd[1617]: time="2025-11-04T23:53:53.440218961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:53:53.441313 kubelet[2756]: E1104 23:53:53.441269 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-4tmvj" podUID="403117e1-6656-4ef1-bd00-648990dd9320" Nov 4 23:53:53.505844 sshd[5236]: Accepted publickey for core from 10.0.0.1 port 35518 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4 Nov 4 23:53:53.507479 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:53:53.512371 systemd-logind[1590]: New session 22 of user core. Nov 4 23:53:53.520538 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 4 23:53:53.688134 sshd[5239]: Connection closed by 10.0.0.1 port 35518 Nov 4 23:53:53.688512 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Nov 4 23:53:53.693278 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:35518.service: Deactivated successfully. Nov 4 23:53:53.695230 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 23:53:53.696236 systemd-logind[1590]: Session 22 logged out. Waiting for processes to exit. Nov 4 23:53:53.697378 systemd-logind[1590]: Removed session 22. Nov 4 23:53:53.930830 containerd[1617]: time="2025-11-04T23:53:53.930766911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:53:53.934365 containerd[1617]: time="2025-11-04T23:53:53.934290339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:53:53.934365 containerd[1617]: time="2025-11-04T23:53:53.934339913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:53:53.934726 kubelet[2756]: E1104 23:53:53.934668 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:53:53.934873 kubelet[2756]: E1104 23:53:53.934752 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:53:53.935201 kubelet[2756]: E1104 23:53:53.935116 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rj4jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68ffc7886c-bvp99_calico-system(61ed265f-0860-4f8f-9e00-9c62a99949f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:53.935722 containerd[1617]: time="2025-11-04T23:53:53.935643675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 4 23:53:53.936812 kubelet[2756]: E1104 23:53:53.936759 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ffc7886c-bvp99" podUID="61ed265f-0860-4f8f-9e00-9c62a99949f4"
Nov 4 23:53:54.256068 containerd[1617]: time="2025-11-04T23:53:54.255977934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:53:54.310434 containerd[1617]: time="2025-11-04T23:53:54.310341227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 4 23:53:54.310616 containerd[1617]: time="2025-11-04T23:53:54.310451877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:53:54.310714 kubelet[2756]: E1104 23:53:54.310651 2756 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:53:54.310773 kubelet[2756]: E1104 23:53:54.310730 2756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:53:54.310982 kubelet[2756]: E1104 23:53:54.310922 2756 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5kmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dzl9n_calico-system(59d88858-9079-42f7-b468-71dc6a4f5e97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:53:54.312808 kubelet[2756]: E1104 23:53:54.312696 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dzl9n" podUID="59d88858-9079-42f7-b468-71dc6a4f5e97"
Nov 4 23:53:58.700727 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:56646.service - OpenSSH per-connection server daemon (10.0.0.1:56646).
Nov 4 23:53:58.764810 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 56646 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4
Nov 4 23:53:58.766891 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:53:58.772118 systemd-logind[1590]: New session 23 of user core.
Nov 4 23:53:58.782425 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 4 23:53:58.894319 sshd[5256]: Connection closed by 10.0.0.1 port 56646
Nov 4 23:53:58.894756 sshd-session[5253]: pam_unix(sshd:session): session closed for user core
Nov 4 23:53:58.900852 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:56646.service: Deactivated successfully.
Nov 4 23:53:58.903175 systemd[1]: session-23.scope: Deactivated successfully.
Nov 4 23:53:58.903991 systemd-logind[1590]: Session 23 logged out. Waiting for processes to exit.
Nov 4 23:53:58.905334 systemd-logind[1590]: Removed session 23.
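Note: every pull above fails with NotFound, meaning the registry itself has no manifest for the v3.30.4 tag, so the kubelet's retries cannot succeed until the tag is published or the image references are corrected. The following is a minimal standalone sketch, not part of this log, of how one could confirm that from outside the cluster by speaking the same OCI distribution API containerd uses; the ghcr.io anonymous-token flow and the tagExists helper are assumptions for illustration.

// tagcheck.go - illustrative sketch; endpoints are the usual ones for
// public ghcr.io images, not something this log demonstrates.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// tagExists (hypothetical helper) reports whether ghcr.io can resolve a
// manifest for repo:tag, mirroring the resolve step containerd performs.
func tagExists(repo, tag string) (bool, error) {
	// Fetch an anonymous pull token for the repository.
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// HEAD the manifest, as a registry client does when resolving a reference.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Add("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	res.Body.Close()
	// 200 means the tag resolves; a 404 lines up with the "failed to
	// resolve reference ... not found" errors recorded above.
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := tagExists("flatcar/calico/kube-controllers", "v3.30.4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("tag exists:", ok)
}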
Nov 4 23:53:59.910495 kubelet[2756]: E1104 23:53:59.910197 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:53:59.911021 kubelet[2756]: E1104 23:53:59.910896 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f6449fc66-sdj9r" podUID="b79db394-cdf6-4f69-a1b0-fe3bb4b1119d"
Nov 4 23:54:02.911783 kubelet[2756]: E1104 23:54:02.911677 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vmhxx" podUID="aba4eacc-4aef-4d09-939a-0ecd4f64c80b"
Nov 4 23:54:03.908415 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:56652.service - OpenSSH per-connection server daemon (10.0.0.1:56652).
Nov 4 23:54:03.915573 kubelet[2756]: E1104 23:54:03.915502 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9f67fff6-f7vtj" podUID="be662a3c-e749-45bb-a12b-0eac658e4ad2"
Nov 4 23:54:03.976661 sshd[5270]: Accepted publickey for core from 10.0.0.1 port 56652 ssh2: RSA SHA256:v8z7uopbB1B1OOL2xS9KndxAowBPo6/CiwqBjTrJpz4
Nov 4 23:54:03.978581 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:54:03.983255 systemd-logind[1590]: New session 24 of user core.
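Note: the dns.go:153 entry above fires because the node's resolv.conf carries more nameservers than a pod's resolv.conf may hold, so the kubelet truncates the list to the three addresses shown. Below is a simplified sketch of that truncation, assuming the three-entry cap Kubernetes imposes; the real implementation lives in the kubelet's DNS configurer and is not reproduced here.

// resolvcap.go - simplified sketch of the "Nameserver limits exceeded" behavior.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Assumption: matches the documented Kubernetes cap on pod nameservers.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect every nameserver entry in file order.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	// Keep the first N and report the rest as omitted, like the log entry above.
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, applied nameserver line is: %s (%d omitted)\n",
			strings.Join(servers[:maxNameservers], " "), len(servers)-maxNameservers)
		return
	}
	fmt.Println("applied nameserver line is:", strings.Join(servers, " "))
}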
Nov 4 23:54:03.992226 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 4 23:54:04.133960 sshd[5273]: Connection closed by 10.0.0.1 port 56652
Nov 4 23:54:04.135103 sshd-session[5270]: pam_unix(sshd:session): session closed for user core
Nov 4 23:54:04.142401 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:56652.service: Deactivated successfully.
Nov 4 23:54:04.145096 systemd[1]: session-24.scope: Deactivated successfully.
Nov 4 23:54:04.146344 systemd-logind[1590]: Session 24 logged out. Waiting for processes to exit.
Nov 4 23:54:04.148408 systemd-logind[1590]: Removed session 24.
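Note: by the end of this window every failing container has moved from ErrImagePull to ImagePullBackOff, meaning the kubelet now waits an increasing interval between pull attempts instead of retrying immediately. The sketch below shows that growth curve, assuming the commonly cited kubelet defaults of a 10-second initial delay doubling up to a 5-minute cap; the exact constants are an assumption, as this log does not state them.

// backoff.go - rough sketch of ImagePullBackOff delay growth; 10s/5m are
// assumed defaults, not values taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second
		maximum = 5 * time.Minute
	)
	delay := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: back off %s before the next pull\n", attempt, delay)
		// Each consecutive failure doubles the wait, clamped at the cap.
		delay *= 2
		if delay > maximum {
			delay = maximum
		}
	}
}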