Jan 23 01:06:02.788375 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:06:02.788490 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:06:02.788503 kernel: BIOS-provided physical RAM map:
Jan 23 01:06:02.788518 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:06:02.788617 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 01:06:02.788628 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 01:06:02.788639 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 01:06:02.788649 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 01:06:02.788741 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 01:06:02.788754 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 01:06:02.788764 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 23 01:06:02.788774 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 01:06:02.788788 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 01:06:02.788797 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 01:06:02.788807 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 01:06:02.788816 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 01:06:02.788901 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 01:06:02.788916 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 01:06:02.788924 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 01:06:02.788933 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 01:06:02.788942 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 01:06:02.788951 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 01:06:02.788961 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 01:06:02.788970 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:06:02.788980 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 01:06:02.788990 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 01:06:02.789000 kernel: NX (Execute Disable) protection: active
Jan 23 01:06:02.789012 kernel: APIC: Static calls initialized
Jan 23 01:06:02.789029 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 23 01:06:02.789039 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 23 01:06:02.789048 kernel: extended physical RAM map:
Jan 23 01:06:02.789057 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:06:02.789225 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 01:06:02.789238 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 01:06:02.789249 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 01:06:02.789260 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 01:06:02.789272 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 01:06:02.789281 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 01:06:02.789290 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 23 01:06:02.789304 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 23 01:06:02.789322 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 23 01:06:02.789334 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 23 01:06:02.789343 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 23 01:06:02.789353 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 01:06:02.789366 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 01:06:02.789375 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 01:06:02.789387 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 01:06:02.789399 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 01:06:02.789409 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 01:06:02.789418 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 01:06:02.789427 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 01:06:02.789436 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 01:06:02.789446 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 01:06:02.789456 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 01:06:02.789467 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 01:06:02.789482 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:06:02.789493 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 01:06:02.789503 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 01:06:02.789610 kernel: efi: EFI v2.7 by EDK II
Jan 23 01:06:02.789624 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 23 01:06:02.789702 kernel: random: crng init done
Jan 23 01:06:02.789714 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 01:06:02.789787 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 01:06:02.789799 kernel: secureboot: Secure boot disabled
Jan 23 01:06:02.789810 kernel: SMBIOS 2.8 present.
Jan 23 01:06:02.789821 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 23 01:06:02.789835 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:06:02.789846 kernel: Hypervisor detected: KVM
Jan 23 01:06:02.789857 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 01:06:02.789867 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:06:02.789878 kernel: kvm-clock: using sched offset of 20133895961 cycles
Jan 23 01:06:02.789890 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:06:02.789902 kernel: tsc: Detected 2445.424 MHz processor
Jan 23 01:06:02.789913 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:06:02.789924 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:06:02.789935 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 01:06:02.789946 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 01:06:02.789961 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:06:02.789972 kernel: Using GB pages for direct mapping
Jan 23 01:06:02.789983 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:06:02.789994 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 23 01:06:02.790005 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 23 01:06:02.790016 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:02.790027 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:02.790038 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 23 01:06:02.790050 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:02.790214 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:02.790229 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:02.790241 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:02.790252 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 01:06:02.790263 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 23 01:06:02.790274 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 23 01:06:02.790285 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 23 01:06:02.790296 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 23 01:06:02.790312 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 23 01:06:02.790323 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 23 01:06:02.790334 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 23 01:06:02.790345 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 23 01:06:02.790356 kernel: No NUMA configuration found
Jan 23 01:06:02.790368 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 23 01:06:02.790379 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 23 01:06:02.790390 kernel: Zone ranges:
Jan 23 01:06:02.790401 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:06:02.790412 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 23 01:06:02.790427 kernel: Normal empty
Jan 23 01:06:02.790437 kernel: Device empty
Jan 23 01:06:02.790448 kernel: Movable zone start for each node
Jan 23 01:06:02.790460 kernel: Early memory node ranges
Jan 23 01:06:02.790472 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 01:06:02.790617 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 23 01:06:02.790629 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 23 01:06:02.790638 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 23 01:06:02.790649 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 23 01:06:02.790668 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 23 01:06:02.790678 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 23 01:06:02.790687 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 23 01:06:02.790697 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 23 01:06:02.790789 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:06:02.790814 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 01:06:02.790827 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 23 01:06:02.790837 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:06:02.790848 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 23 01:06:02.790860 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 01:06:02.790872 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 01:06:02.790883 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 23 01:06:02.790899 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 23 01:06:02.790910 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 01:06:02.790922 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:06:02.790933 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:06:02.790945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 01:06:02.790960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:06:02.790972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:06:02.790983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:06:02.790995 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:06:02.791006 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:06:02.791018 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 01:06:02.791029 kernel: TSC deadline timer available
Jan 23 01:06:02.791041 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:06:02.791053 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:06:02.791378 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:06:02.791393 kernel: CPU topo: Max. threads per core: 1
Jan 23 01:06:02.791405 kernel: CPU topo: Num. cores per package: 4
Jan 23 01:06:02.791417 kernel: CPU topo: Num. threads per package: 4
Jan 23 01:06:02.791428 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 01:06:02.791440 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:06:02.791452 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 01:06:02.791463 kernel: kvm-guest: setup PV sched yield
Jan 23 01:06:02.791474 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 23 01:06:02.791490 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:06:02.791502 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:06:02.791514 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 01:06:02.791615 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 01:06:02.791629 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 01:06:02.791641 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 01:06:02.791652 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:06:02.791664 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:06:02.791748 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:06:02.791765 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 01:06:02.791777 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:06:02.791788 kernel: Fallback order for Node 0: 0
Jan 23 01:06:02.791800 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 23 01:06:02.791812 kernel: Policy zone: DMA32
Jan 23 01:06:02.791824 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:06:02.791835 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 01:06:02.791847 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:06:02.791862 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:06:02.791873 kernel: Dynamic Preempt: voluntary
Jan 23 01:06:02.791885 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:06:02.791897 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:06:02.791909 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 01:06:02.791921 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:06:02.791932 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:06:02.791945 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:06:02.791956 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:06:02.791971 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 01:06:02.792052 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 01:06:02.792214 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 01:06:02.792229 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 01:06:02.792241 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 01:06:02.792252 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:06:02.792264 kernel: Console: colour dummy device 80x25
Jan 23 01:06:02.792275 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:06:02.792287 kernel: ACPI: Core revision 20240827
Jan 23 01:06:02.792304 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 01:06:02.792315 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:06:02.792327 kernel: x2apic enabled
Jan 23 01:06:02.792339 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:06:02.792350 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 01:06:02.792362 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 01:06:02.792374 kernel: kvm-guest: setup PV IPIs
Jan 23 01:06:02.792385 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 01:06:02.792397 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 01:06:02.792413 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 23 01:06:02.792424 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 01:06:02.792436 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 01:06:02.792447 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 01:06:02.792459 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:06:02.792470 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:06:02.792482 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:06:02.792493 kernel: Speculative Store Bypass: Vulnerable
Jan 23 01:06:02.792505 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 01:06:02.792521 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 01:06:02.792706 kernel: active return thunk: srso_alias_return_thunk
Jan 23 01:06:02.792720 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 01:06:02.792731 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 01:06:02.792743 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:06:02.792754 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:06:02.792766 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:06:02.792778 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:06:02.792793 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:06:02.792805 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 01:06:02.792817 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:06:02.792829 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:06:02.792841 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:06:02.792852 kernel: landlock: Up and running.
Jan 23 01:06:02.792863 kernel: SELinux: Initializing.
Jan 23 01:06:02.792875 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:06:02.792887 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:06:02.792902 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 01:06:02.792914 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 01:06:02.792925 kernel: signal: max sigframe size: 1776
Jan 23 01:06:02.792937 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:06:02.792949 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:06:02.792961 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:06:02.792972 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:06:02.792984 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:06:02.792996 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:06:02.793010 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 01:06:02.793021 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 01:06:02.793033 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 23 01:06:02.793045 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 145388K reserved, 0K cma-reserved)
Jan 23 01:06:02.793057 kernel: devtmpfs: initialized
Jan 23 01:06:02.793240 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:06:02.793254 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 23 01:06:02.793266 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 23 01:06:02.793278 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 23 01:06:02.793295 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 23 01:06:02.793307 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 23 01:06:02.793318 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 23 01:06:02.793330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:06:02.793341 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 01:06:02.793353 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:06:02.793365 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:06:02.793377 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:06:02.793388 kernel: audit: type=2000 audit(1769130346.263:1): state=initialized audit_enabled=0 res=1
Jan 23 01:06:02.793403 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:06:02.793415 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:06:02.793427 kernel: cpuidle: using governor menu
Jan 23 01:06:02.793438 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:06:02.793450 kernel: dca service started, version 1.12.1
Jan 23 01:06:02.793462 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 23 01:06:02.793473 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:06:02.793485 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:06:02.793496 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:06:02.793512 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:06:02.793798 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:06:02.793819 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:06:02.793831 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:06:02.793843 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:06:02.793855 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:06:02.793866 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:06:02.793878 kernel: ACPI: Interpreter enabled
Jan 23 01:06:02.793889 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 01:06:02.793907 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:06:02.793918 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:06:02.793928 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:06:02.793937 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 01:06:02.793947 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:06:02.795932 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:06:02.796457 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 01:06:02.796774 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 01:06:02.796794 kernel: PCI host bridge to bus 0000:00
Jan 23 01:06:02.797415 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:06:02.797708 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:06:02.797893 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:06:02.798248 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 23 01:06:02.798436 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 01:06:02.798728 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 23 01:06:02.798930 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:06:02.799631 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:06:02.800649 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:06:02.800859 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 23 01:06:02.801062 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 23 01:06:02.801423 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 01:06:02.801722 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:06:02.801924 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 15625 usecs
Jan 23 01:06:02.802669 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 01:06:02.802879 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 23 01:06:02.803256 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 23 01:06:02.803462 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 23 01:06:02.803944 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 01:06:02.804330 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 23 01:06:02.804630 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 23 01:06:02.804838 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 23 01:06:02.805490 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 01:06:02.805794 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 23 01:06:02.805991 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 23 01:06:02.806503 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 23 01:06:02.806811 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 23 01:06:02.807457 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:06:02.807751 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 01:06:02.807946 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 11718 usecs
Jan 23 01:06:02.808420 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 01:06:02.808726 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 23 01:06:02.808932 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 23 01:06:02.809694 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 01:06:02.809899 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 23 01:06:02.809918 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:06:02.809930 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:06:02.809942 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:06:02.809953 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:06:02.809966 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 01:06:02.809985 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 01:06:02.809998 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 01:06:02.810010 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 01:06:02.810020 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 01:06:02.810030 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 01:06:02.810040 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 01:06:02.810052 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 01:06:02.810063 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 01:06:02.810344 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 01:06:02.810363 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 01:06:02.810376 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 01:06:02.810386 kernel: iommu: Default domain type: Translated
Jan 23 01:06:02.810396 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:06:02.810410 kernel: efivars: Registered efivars operations
Jan 23 01:06:02.810420 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:06:02.810430 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:06:02.810443 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 23 01:06:02.810455 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 23 01:06:02.810470 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 23 01:06:02.810481 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 23 01:06:02.810493 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 23 01:06:02.810506 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 23 01:06:02.810517 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 23 01:06:02.810628 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 23 01:06:02.810844 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 01:06:02.811047 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 01:06:02.811747 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:06:02.811770 kernel: vgaarb: loaded
Jan 23 01:06:02.811781 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 01:06:02.811791 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 01:06:02.811802 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:06:02.811812 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:06:02.811822 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:06:02.811833 kernel: pnp: PnP ACPI init
Jan 23 01:06:02.812985 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 23 01:06:02.813015 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 01:06:02.813028 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:06:02.813039 kernel: NET: Registered PF_INET protocol family
Jan 23 01:06:02.813050 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 01:06:02.813062 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 01:06:02.813361 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:06:02.813380 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:06:02.813391 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 01:06:02.813406 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 01:06:02.813417 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:06:02.813427 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:06:02.813437 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:06:02.813449 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:06:02.813757 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 23 01:06:02.813970 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 23 01:06:02.814439 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:06:02.814745 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:06:02.814930 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:06:02.815712 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 23 01:06:02.815900 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 23 01:06:02.816288 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 23 01:06:02.816309 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:06:02.816325 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 01:06:02.816337 kernel: Initialise system trusted keyrings
Jan 23 01:06:02.816357 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 01:06:02.816368 kernel: Key type asymmetric registered
Jan 23 01:06:02.816379 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:06:02.816389 kernel: hrtimer: interrupt took 7096501 ns
Jan 23 01:06:02.816400 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:06:02.816410 kernel: io scheduler mq-deadline registered
Jan 23 01:06:02.816423 kernel: io scheduler kyber registered
Jan 23 01:06:02.816436 kernel: io scheduler bfq registered
Jan 23 01:06:02.816446 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:06:02.816464 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 01:06:02.816478 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 01:06:02.816489 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 01:06:02.816499 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:06:02.816510 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:06:02.816735 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:06:02.816763 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:06:02.816777 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:06:02.817500 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 01:06:02.817636 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 01:06:02.817848 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 01:06:02.818057 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T01:05:59 UTC (1769130359)
Jan 23 01:06:02.818452 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 23 01:06:02.818478 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 01:06:02.818500 kernel: efifb: probing for efifb
Jan 23 01:06:02.818516 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 23 01:06:02.818630 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 23 01:06:02.818645 kernel: efifb: scrolling: redraw
Jan 23 01:06:02.818656 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 01:06:02.818666 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 01:06:02.818677 kernel: fb0: EFI VGA frame buffer device
Jan 23 01:06:02.818687 kernel: pstore: Using crash dump compression: deflate
Jan 23 01:06:02.818697 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 01:06:02.818714 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:06:02.818724 kernel: Segment Routing with IPv6
Jan 23 01:06:02.818736 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:06:02.818749 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:06:02.818760 kernel: Key type dns_resolver registered
Jan 23 01:06:02.818770 kernel: IPI shorthand broadcast: enabled
Jan 23 01:06:02.818780 kernel: sched_clock: Marking stable (13346072513, 1076003771)->(14974097002, -552020718)
Jan 23 01:06:02.818791 kernel: registered taskstats version 1
Jan 23 01:06:02.818801 kernel: Loading compiled-in X.509 certificates
Jan 23 01:06:02.818817 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:06:02.818827 kernel: Demotion targets for Node 0: null
Jan 23 01:06:02.818839 kernel: Key type .fscrypt registered
Jan 23 01:06:02.818852 kernel: Key type fscrypt-provisioning registered
Jan 23 01:06:02.818865 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:06:02.818875 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:06:02.818885 kernel: ima: No architecture policies found
Jan 23 01:06:02.818895 kernel: clk: Disabling unused clocks
Jan 23 01:06:02.818906 kernel: Warning: unable to open an initial console.
Jan 23 01:06:02.818923 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:06:02.818934 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:06:02.818947 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:06:02.818960 kernel: Run /init as init process
Jan 23 01:06:02.818970 kernel: with arguments:
Jan 23 01:06:02.818981 kernel: /init
Jan 23 01:06:02.818992 kernel: with environment:
Jan 23 01:06:02.819007 kernel: HOME=/
Jan 23 01:06:02.819017 kernel: TERM=linux
Jan 23 01:06:02.819034 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:06:02.819053 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:06:02.819241 systemd[1]: Detected virtualization kvm.
Jan 23 01:06:02.819256 systemd[1]: Detected architecture x86-64.
Jan 23 01:06:02.819267 systemd[1]: Running in initrd.
Jan 23 01:06:02.819278 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:06:02.819289 systemd[1]: Hostname set to <localhost>.
Jan 23 01:06:02.819306 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 01:06:02.819317 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:06:02.819331 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:06:02.819343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:06:02.819355 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:06:02.819366 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:06:02.819377 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:06:02.819394 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:06:02.819407 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:06:02.819421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:06:02.819435 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:06:02.819447 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:06:02.819458 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:06:02.819469 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:06:02.819480 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:06:02.819496 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:06:02.819507 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:06:02.819519 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:06:02.819631 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:06:02.819646 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:06:02.819660 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:06:02.819671 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:06:02.819683 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:06:02.819693 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:06:02.819710 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:06:02.819721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:06:02.819733 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:06:02.819747 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:06:02.819761 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:06:02.819772 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:06:02.819783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:06:02.819794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:06:02.819861 systemd-journald[203]: Collecting audit messages is disabled.
Jan 23 01:06:02.819891 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:06:02.819909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:06:02.819922 systemd-journald[203]: Journal started
Jan 23 01:06:02.819951 systemd-journald[203]: Runtime Journal (/run/log/journal/03569212a3124c3690b14871405eac03) is 6M, max 48.1M, 42.1M free.
Jan 23 01:06:02.874914 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:06:02.888252 systemd-modules-load[204]: Inserted module 'overlay'
Jan 23 01:06:02.891427 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:06:02.941438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:06:02.990372 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:06:03.037618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:06:03.129684 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:06:03.216298 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 01:06:03.253523 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:06:03.271813 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:06:03.301803 kernel: Bridge firewalling registered
Jan 23 01:06:03.301047 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 23 01:06:03.302036 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:06:03.322646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:06:03.372625 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:06:03.374979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:06:03.394994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:06:03.430485 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 01:06:03.473025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:06:03.506035 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:06:03.527481 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:06:03.546848 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:06:03.687617 systemd-resolved[259]: Positive Trust Anchors:
Jan 23 01:06:03.687707 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:06:03.687734 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:06:03.692038 systemd-resolved[259]: Defaulting to hostname 'linux'.
Jan 23 01:06:03.698732 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:06:03.724976 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:06:04.060323 kernel: SCSI subsystem initialized
Jan 23 01:06:04.090371 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 01:06:04.136696 kernel: iscsi: registered transport (tcp)
Jan 23 01:06:04.204951 kernel: iscsi: registered transport (qla4xxx)
Jan 23 01:06:04.205055 kernel: QLogic iSCSI HBA Driver
Jan 23 01:06:04.284917 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:06:04.356889 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:06:04.360034 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:06:04.581872 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:06:04.603728 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 01:06:04.872648 kernel: raid6: avx2x4 gen() 18262 MB/s
Jan 23 01:06:04.896035 kernel: raid6: avx2x2 gen() 8790 MB/s
Jan 23 01:06:04.930360 kernel: raid6: avx2x1 gen() 6767 MB/s
Jan 23 01:06:04.931265 kernel: raid6: using algorithm avx2x4 gen() 18262 MB/s
Jan 23 01:06:04.968322 kernel: raid6: .... xor() 1814 MB/s, rmw enabled
Jan 23 01:06:04.969408 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 01:06:05.130307 kernel: xor: automatically using best checksumming function avx
Jan 23 01:06:07.212610 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 01:06:07.269653 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:06:07.292770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:06:07.399711 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Jan 23 01:06:07.419683 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:06:07.472856 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 01:06:07.618272 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 23 01:06:07.762517 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:06:07.785317 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:06:08.068695 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:06:08.085810 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 01:06:08.298501 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 23 01:06:08.323703 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 01:06:08.355917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:06:08.388494 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 23 01:06:08.356361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:06:08.424685 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 01:06:08.424736 kernel: GPT:9289727 != 19775487
Jan 23 01:06:08.424759 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 01:06:08.424776 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 01:06:08.396663 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:06:08.464252 kernel: GPT:9289727 != 19775487
Jan 23 01:06:08.464295 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 01:06:08.464310 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:06:08.483633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:06:08.510928 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:06:08.566520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:06:08.566946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:06:08.584745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:06:08.694963 kernel: libata version 3.00 loaded.
Jan 23 01:06:08.738773 kernel: AES CTR mode by8 optimization enabled
Jan 23 01:06:08.749451 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 01:06:08.763251 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 01:06:08.787386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:06:08.827314 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 01:06:08.827782 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 01:06:08.828038 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 01:06:08.875256 kernel: scsi host0: ahci
Jan 23 01:06:08.884408 kernel: scsi host1: ahci
Jan 23 01:06:08.903399 kernel: scsi host2: ahci
Jan 23 01:06:08.911407 kernel: scsi host3: ahci
Jan 23 01:06:08.911812 kernel: scsi host4: ahci
Jan 23 01:06:08.916789 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 01:06:08.996753 kernel: scsi host5: ahci
Jan 23 01:06:08.997286 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Jan 23 01:06:08.997307 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Jan 23 01:06:08.997321 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Jan 23 01:06:08.997335 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Jan 23 01:06:08.997350 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Jan 23 01:06:08.997364 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Jan 23 01:06:09.023672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 01:06:09.052237 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 01:06:09.079247 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 01:06:09.079705 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 01:06:09.139904 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 01:06:09.222805 disk-uuid[619]: Primary Header is updated.
Jan 23 01:06:09.222805 disk-uuid[619]: Secondary Entries is updated.
Jan 23 01:06:09.222805 disk-uuid[619]: Secondary Header is updated.
Jan 23 01:06:09.267821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:06:09.316506 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:09.325314 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 01:06:09.334335 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:09.345421 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:09.370733 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:09.380518 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:09.396276 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 01:06:09.396391 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 01:06:09.396412 kernel: ata3.00: applying bridge limits
Jan 23 01:06:09.411262 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 01:06:09.411311 kernel: ata3.00: configured for UDMA/100
Jan 23 01:06:09.436860 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 01:06:09.534756 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 01:06:09.535536 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 01:06:09.568840 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 23 01:06:10.198683 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:06:10.217945 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:06:10.319496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:06:10.319648 disk-uuid[621]: The operation has completed successfully.
Jan 23 01:06:10.246468 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:06:10.269951 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:06:10.285362 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 01:06:10.374829 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:06:10.497385 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 01:06:10.497779 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 01:06:10.541385 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 01:06:10.622366 sh[650]: Success
Jan 23 01:06:10.727247 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 01:06:10.734393 kernel: device-mapper: uevent: version 1.0.3
Jan 23 01:06:10.734477 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 01:06:10.825497 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 01:06:10.965242 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 01:06:10.984654 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 01:06:11.027690 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 01:06:11.085460 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (662)
Jan 23 01:06:11.085516 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 01:06:11.085533 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:06:11.134765 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 01:06:11.134853 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 01:06:11.144262 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 01:06:11.148378 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:06:11.162905 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 01:06:11.165674 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 01:06:11.221906 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 01:06:11.323776 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (686)
Jan 23 01:06:11.346619 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:06:11.346684 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:06:11.384861 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 01:06:11.384939 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 01:06:11.420860 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:06:11.432543 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 01:06:11.466321 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 01:06:11.742801 ignition[748]: Ignition 2.22.0 Jan 23 01:06:11.742903 ignition[748]: Stage: fetch-offline Jan 23 01:06:11.742962 ignition[748]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:11.742979 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:06:11.743358 ignition[748]: parsed url from cmdline: "" Jan 23 01:06:11.743366 ignition[748]: no config URL provided Jan 23 01:06:11.743375 ignition[748]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:06:11.743390 ignition[748]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:06:11.743427 ignition[748]: op(1): [started] loading QEMU firmware config module Jan 23 01:06:11.743434 ignition[748]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 23 01:06:11.784223 ignition[748]: op(1): [finished] loading QEMU firmware config module Jan 23 01:06:11.858913 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:06:11.878753 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:06:12.003358 systemd-networkd[839]: lo: Link UP Jan 23 01:06:12.003441 systemd-networkd[839]: lo: Gained carrier Jan 23 01:06:12.006928 systemd-networkd[839]: Enumeration completed Jan 23 01:06:12.007356 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:06:12.013860 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:12.013867 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:06:12.020369 systemd-networkd[839]: eth0: Link UP Jan 23 01:06:12.021929 systemd-networkd[839]: eth0: Gained carrier Jan 23 01:06:12.021943 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:12.035319 systemd[1]: Reached target network.target - Network. Jan 23 01:06:12.132294 systemd-networkd[839]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 01:06:12.577693 ignition[748]: parsing config with SHA512: db8c89f5343fe1d66ec507c3d678d24c3f427291525c2b4ae5b044db973cf57c7d9946cc19ef04e28f05807866b31eac112391994f5f8794036419d18acac333 Jan 23 01:06:12.597866 unknown[748]: fetched base config from "system" Jan 23 01:06:12.597945 unknown[748]: fetched user config from "qemu" Jan 23 01:06:12.598764 ignition[748]: fetch-offline: fetch-offline passed Jan 23 01:06:12.606866 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:06:12.598856 ignition[748]: Ignition finished successfully Jan 23 01:06:12.620021 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 23 01:06:12.622047 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 01:06:12.715751 ignition[844]: Ignition 2.22.0 Jan 23 01:06:12.715824 ignition[844]: Stage: kargs Jan 23 01:06:12.716180 ignition[844]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:12.725510 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
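Ignition found no config URL on the command line and fell back to QEMU's fw_cfg module, while systemd-networkd matched eth0 against the catch-all zz-default.network and obtained 10.0.0.42/16 over DHCP. A minimal sketch of a unit that pins the same DHCP behaviour to the interface by name (the path and the 50- prefix are conventional choices, not files from this system):

    mkdir -p /etc/systemd/network
    cat >/etc/systemd/network/50-eth0.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    # Re-evaluate .network files without restarting the daemon
    networkctl reload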
Jan 23 01:06:12.716201 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:06:12.721214 ignition[844]: kargs: kargs passed Jan 23 01:06:12.721293 ignition[844]: Ignition finished successfully Jan 23 01:06:12.764663 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 01:06:12.848010 ignition[852]: Ignition 2.22.0 Jan 23 01:06:12.848179 ignition[852]: Stage: disks Jan 23 01:06:12.848681 ignition[852]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:12.848699 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:06:12.850416 ignition[852]: disks: disks passed Jan 23 01:06:12.850475 ignition[852]: Ignition finished successfully Jan 23 01:06:12.876977 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 01:06:12.887905 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 01:06:12.894261 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 01:06:12.906998 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:06:12.912748 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:06:12.920253 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:06:12.933956 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 01:06:13.009465 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 01:06:13.019986 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:06:13.041680 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:06:13.332395 kernel: EXT4-fs (vda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:06:13.333467 systemd-networkd[839]: eth0: Gained IPv6LL Jan 23 01:06:13.334705 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:06:13.359859 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:06:13.374961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:06:13.393532 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:06:13.398763 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 01:06:13.433038 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871) Jan 23 01:06:13.398836 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:06:13.398872 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:06:13.473222 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:13.473270 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:06:13.485941 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:06:13.503444 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:06:13.503520 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:06:13.497529 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 01:06:13.509508 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
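systemd-fsck reported the ROOT ext4 filesystem clean before it was mounted read-write at /sysroot. The same check can be reproduced non-destructively, but only against an unmounted or read-only device:

    # -n: open read-only and answer "no" to any repair prompt
    e2fsck -n /dev/disk/by-label/ROOT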
Jan 23 01:06:13.638721 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:06:13.654370 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:06:13.666878 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:06:13.676803 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:06:13.944003 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:06:13.957948 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:06:13.960737 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:06:14.002209 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 01:06:14.012974 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:14.040459 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 01:06:14.089646 ignition[985]: INFO : Ignition 2.22.0 Jan 23 01:06:14.089646 ignition[985]: INFO : Stage: mount Jan 23 01:06:14.106412 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:14.106412 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:06:14.106412 ignition[985]: INFO : mount: mount passed Jan 23 01:06:14.106412 ignition[985]: INFO : Ignition finished successfully Jan 23 01:06:14.094804 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:06:14.103908 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:06:14.371522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:06:14.510786 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998) Jan 23 01:06:14.573960 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:14.581464 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:06:14.676524 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:06:14.676784 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:06:14.692047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
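The cut: ... No such file or directory lines are initrd-setup-root probing /sysroot's user database, which does not yet exist at that point on a first boot; the errors are harmless and the service still finishes successfully. The failing invocations are presumably of this shape (the field selection is an assumption about the script, not quoted from it):

    # Extract login names, field 1 of the colon-separated passwd format
    cut -d: -f1 /sysroot/etc/passwd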
Jan 23 01:06:14.908161 ignition[1015]: INFO : Ignition 2.22.0 Jan 23 01:06:14.908161 ignition[1015]: INFO : Stage: files Jan 23 01:06:14.919700 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:14.919700 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:06:14.919700 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:06:14.944650 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:06:14.944650 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:06:14.944650 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:06:15.009270 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:06:15.009270 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:06:15.009270 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 01:06:15.009270 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 01:06:14.953514 unknown[1015]: wrote ssh authorized keys file for user: core Jan 23 01:06:15.118292 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 01:06:15.304022 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 01:06:15.323050 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:06:15.338688 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 01:06:15.338688 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:06:15.338688 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:06:15.338688 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:06:15.338688 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:06:15.338688 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:06:15.338688 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:06:15.491324 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:06:15.491324 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:06:15.491324 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:06:15.491324 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:06:15.491324 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:06:15.491324 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 01:06:15.757730 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 01:06:17.048417 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:06:17.048417 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 01:06:17.123937 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:06:17.123937 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:06:17.123937 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 01:06:17.123937 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 01:06:17.123937 ignition[1015]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 01:06:17.123937 ignition[1015]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 01:06:17.123937 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 01:06:17.123937 ignition[1015]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 01:06:17.596019 ignition[1015]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 01:06:17.628889 ignition[1015]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 01:06:17.628889 ignition[1015]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 01:06:17.687664 ignition[1015]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 01:06:17.687664 ignition[1015]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 01:06:17.687664 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:06:17.687664 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:06:17.687664 ignition[1015]: INFO : files: files passed Jan 23 01:06:17.687664 ignition[1015]: INFO : Ignition finished successfully Jan 23 01:06:17.645421 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:06:17.696237 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 01:06:17.802240 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 01:06:17.883876 systemd[1]: ignition-quench.service: Deactivated successfully. 
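The files stage above wrote everything requested by the Ignition config served through fw_cfg: the helm tarball, several manifests, update.conf, the kubernetes sysext image plus its /etc/extensions symlink, and the unit presets. A minimal sketch of a config fragment that would produce one of those writes, checked with the ignition-validate tool (the spec version and contents are illustrative, not this VM's actual config):

    cat >user.ign <<'EOF'
    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
          }
        ]
      }
    }
    EOF
    ignition-validate user.ign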
Jan 23 01:06:17.921316 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 01:06:18.003641 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 01:06:18.018214 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:06:18.044848 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:06:18.044848 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:06:18.091859 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:06:18.129180 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 01:06:18.154381 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:06:18.341350 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:06:18.349872 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:06:18.381008 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:06:18.419327 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:06:18.443371 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:06:18.446553 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:06:18.703555 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:06:18.722301 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 01:06:18.818693 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:06:18.836667 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:06:18.884226 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:06:18.896364 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 01:06:18.896841 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:06:18.905244 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:06:18.906052 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:06:18.916668 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 01:06:18.923933 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:06:18.931754 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:06:18.942469 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:06:18.942926 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:06:18.949404 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:06:18.954558 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:06:18.987681 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:06:18.991818 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:06:18.997802 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:06:18.998519 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 23 01:06:19.002553 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:06:19.002772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:06:19.002879 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:06:19.003638 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:06:19.009498 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:06:19.009830 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:06:19.018192 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:06:19.018371 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:06:19.025330 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:06:19.025859 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:06:19.030865 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:06:19.037920 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:06:19.403470 ignition[1069]: INFO : Ignition 2.22.0 Jan 23 01:06:19.403470 ignition[1069]: INFO : Stage: umount Jan 23 01:06:19.403470 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:19.403470 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:06:19.403470 ignition[1069]: INFO : umount: umount passed Jan 23 01:06:19.403470 ignition[1069]: INFO : Ignition finished successfully Jan 23 01:06:19.044930 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:06:19.047919 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:06:19.048329 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:06:19.049354 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:06:19.049540 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:06:19.050735 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:06:19.050950 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:06:19.078774 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:06:19.078991 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:06:19.089679 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:06:19.099913 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:06:19.108219 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:06:19.108559 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:06:19.109493 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:06:19.109852 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:06:19.129757 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:06:19.159505 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 01:06:19.209793 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:06:19.252494 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:06:19.252701 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:06:19.390301 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 23 01:06:19.390563 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:06:19.396738 systemd[1]: Stopped target network.target - Network. Jan 23 01:06:19.406004 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:06:19.406246 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:06:19.418979 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:06:19.419202 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:06:19.430662 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:06:19.430736 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:06:19.462299 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:06:19.462561 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:06:19.471210 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:06:19.471474 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:06:19.496626 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:06:19.519343 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:06:19.544334 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:06:19.544968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:06:19.594670 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:06:19.597818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:06:19.597954 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:06:19.657908 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:06:19.659005 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:06:19.660763 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:06:19.707183 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:06:19.712012 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:06:19.727362 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:06:19.729701 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:06:19.747389 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:06:19.769363 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:06:19.769747 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:06:19.773731 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:06:19.773828 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:06:19.823948 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:06:19.825837 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:06:19.844726 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:06:19.916733 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:06:19.928280 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:06:19.928734 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 23 01:06:19.936860 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:06:19.936969 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:06:19.974334 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:06:19.977882 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:06:19.994837 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:06:19.994958 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:06:20.010056 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:06:20.010375 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:06:20.042925 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:06:20.043229 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:06:20.103016 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:06:20.845641 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 23 01:06:20.105710 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:06:20.105802 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:06:20.132417 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:06:20.132526 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:06:20.193919 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 01:06:20.194030 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:06:20.220810 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:06:20.220960 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:06:20.236479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:06:20.236725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:20.328347 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:06:20.328459 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 01:06:20.328667 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 01:06:20.328751 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:06:20.332835 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:06:20.333050 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:06:20.430781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:06:20.430996 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:06:20.495322 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:06:20.531401 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:06:20.704288 systemd[1]: Switching root. 
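Switching root is the initrd's final act: targets, sockets and services are torn down in reverse, udev and the cmdline hooks are stopped, and the journal daemon receives SIGTERM before PID 1 re-executes inside the real root filesystem. The administrative equivalent of that last step (here performed by initrd-switch-root.service):

    # Hand control to the installed system mounted at /sysroot
    systemctl switch-root /sysroot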
Jan 23 01:06:20.998931 systemd-journald[203]: Journal stopped Jan 23 01:06:32.383787 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 5065633481 wd_nsec: 5065631616 Jan 23 01:06:32.384244 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:06:32.384270 kernel: SELinux: policy capability open_perms=1 Jan 23 01:06:32.384364 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:06:32.384385 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:06:32.384412 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:06:32.384440 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:06:32.384457 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:06:32.384473 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:06:32.384489 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:06:32.384506 kernel: audit: type=1403 audit(1769130386.467:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:06:32.384527 systemd[1]: Successfully loaded SELinux policy in 261.575ms. Jan 23 01:06:32.384554 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.518ms. Jan 23 01:06:32.384571 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:06:32.384722 systemd[1]: Detected virtualization kvm. Jan 23 01:06:32.384750 systemd[1]: Detected architecture x86-64. Jan 23 01:06:32.384770 systemd[1]: Detected first boot. Jan 23 01:06:32.384789 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:06:32.384812 zram_generator::config[1113]: No configuration found. Jan 23 01:06:32.384835 kernel: Guest personality initialized and is inactive Jan 23 01:06:32.384851 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 01:06:32.384865 kernel: Initialized host personality Jan 23 01:06:32.384881 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:06:32.384907 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:06:32.384926 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:06:32.384943 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:06:32.384963 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:06:32.384977 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:06:32.384989 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:06:32.385001 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:06:32.385012 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:06:32.385025 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:06:32.385041 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:06:32.385052 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 01:06:32.385232 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:06:32.385253 systemd[1]: Created slice user.slice - User and Session Slice. 
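After the pivot, the first messages come from the real root: the SELinux policy loads in roughly a quarter of a second, and systemd 256.8 detects KVM and a first boot before initializing the machine ID from the VM UUID. The resulting enforcement state can be inspected later from a shell (getenforce ships with libselinux; its presence on a given image is an assumption):

    getenforce                   # Enforcing, Permissive or Disabled
    cat /sys/fs/selinux/enforce  # the same answer straight from the kernel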
Jan 23 01:06:32.385266 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:06:32.385278 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:06:32.385290 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:06:32.385303 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:06:32.385320 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:06:32.385333 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:06:32.385344 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:06:32.385356 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:06:32.385368 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:06:32.385380 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:06:32.385391 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:06:32.385403 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:06:32.385417 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:06:32.385428 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:06:32.385440 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:06:32.385452 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:06:32.385464 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:06:32.385475 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 01:06:32.385487 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:06:32.385498 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:06:32.385512 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:06:32.385538 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:06:32.385709 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:06:32.385726 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:06:32.385740 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:06:32.385752 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:06:32.385763 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:06:32.385775 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:32.385787 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:06:32.385800 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:06:32.385828 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 01:06:32.385849 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:06:32.385868 systemd[1]: Reached target machines.target - Containers. 
Jan 23 01:06:32.385888 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 01:06:32.385981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:32.386003 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:06:32.386023 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:06:32.386042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:06:32.386060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:06:32.386276 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:06:32.386369 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:06:32.386393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:06:32.386412 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:06:32.386430 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:06:32.386447 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:06:32.386535 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:06:32.386556 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:06:32.386580 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:32.386669 kernel: fuse: init (API version 7.41) Jan 23 01:06:32.386697 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:06:32.386717 kernel: ACPI: bus type drm_connector registered Jan 23 01:06:32.386734 kernel: loop: module loaded Jan 23 01:06:32.386752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:06:32.386770 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:06:32.386789 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:06:32.386848 systemd-journald[1198]: Collecting audit messages is disabled. Jan 23 01:06:32.387031 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:06:32.387055 systemd-journald[1198]: Journal started Jan 23 01:06:32.387227 systemd-journald[1198]: Runtime Journal (/run/log/journal/03569212a3124c3690b14871405eac03) is 6M, max 48.1M, 42.1M free. Jan 23 01:06:30.087376 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:06:30.121464 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 01:06:30.123297 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:06:30.124840 systemd[1]: systemd-journald.service: Consumed 2.566s CPU time. Jan 23 01:06:32.426331 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:06:32.448209 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:06:32.454375 systemd[1]: Stopped verity-setup.service. 
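The modprobe@configfs, modprobe@dm_mod, modprobe@fuse and similar jobs are instances of systemd's modprobe@.service template; the instance name after the @ is the module to load, which is why the fuse, drm and loop initializations show up in the kernel log moments later. Roughly (the expansion shown paraphrases the stock template; exact flags vary by systemd version):

    systemctl start modprobe@fuse.service
    # ...which effectively runs:
    modprobe -abq fuse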
Jan 23 01:06:32.494584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:32.509327 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:06:32.519447 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:06:32.528323 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:06:32.536883 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:06:32.543760 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:06:32.566549 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:06:32.575833 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:06:32.589682 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:06:32.604355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:06:32.619888 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:06:32.624830 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:06:32.645783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:06:32.646525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:06:32.658507 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:06:32.659415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:06:32.675051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:06:32.675809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:06:32.689555 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:06:32.694283 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:06:32.705541 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:06:32.706058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:06:32.717401 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:06:32.732020 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:06:32.745207 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 01:06:32.766557 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:06:32.776509 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:06:32.817922 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:06:32.831243 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:06:32.867383 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:06:32.877296 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:06:32.877359 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:06:32.889573 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:06:32.905539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 23 01:06:32.917477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:32.924810 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:06:32.937781 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:06:32.949283 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:06:32.952768 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:06:32.968278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:06:32.978323 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:06:33.000404 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:06:33.022054 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:06:33.032780 systemd-journald[1198]: Time spent on flushing to /var/log/journal/03569212a3124c3690b14871405eac03 is 26.036ms for 1070 entries. Jan 23 01:06:33.032780 systemd-journald[1198]: System Journal (/var/log/journal/03569212a3124c3690b14871405eac03) is 8M, max 195.6M, 187.6M free. Jan 23 01:06:33.087756 systemd-journald[1198]: Received client request to flush runtime journal. Jan 23 01:06:33.035339 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:06:33.050568 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:06:33.069910 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:06:33.081977 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:06:33.095286 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:06:33.119188 kernel: loop0: detected capacity change from 0 to 128560 Jan 23 01:06:33.115281 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:06:33.129919 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:06:33.163026 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 23 01:06:33.163553 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 23 01:06:33.178381 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:06:33.212687 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:06:33.214544 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:06:33.236263 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:06:33.242780 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:06:33.274281 kernel: loop1: detected capacity change from 0 to 219144 Jan 23 01:06:33.328047 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:06:33.360499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:06:33.435334 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 01:06:33.484323 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. 
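The journald lines above record the runtime-to-persistent handover: the runtime journal in /run is capped at 48.1M, and on the flush request entries move to /var/log/journal, where the system journal allows up to 195.6M. The same handover can be triggered and inspected by hand:

    journalctl --flush        # ask journald to migrate /run logs to /var/log/journal
    journalctl --disk-usage   # confirm the persistent journal's size afterwards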
Jan 23 01:06:33.484949 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 23 01:06:33.721495 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:06:33.920838 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 01:06:34.080348 kernel: loop4: detected capacity change from 0 to 219144 Jan 23 01:06:34.205343 kernel: loop5: detected capacity change from 0 to 110984 Jan 23 01:06:34.299981 (sd-merge)[1259]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 23 01:06:34.301427 (sd-merge)[1259]: Merged extensions into '/usr'. Jan 23 01:06:34.321589 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:06:34.321890 systemd[1]: Reloading... Jan 23 01:06:34.573343 zram_generator::config[1282]: No configuration found. Jan 23 01:06:35.661919 systemd[1]: Reloading finished in 1338 ms. Jan 23 01:06:35.755385 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:06:35.801359 systemd[1]: Starting ensure-sysext.service... Jan 23 01:06:35.816314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:06:36.012605 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:06:36.039039 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:06:36.065338 systemd[1]: Reload requested from client PID 1321 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:06:36.065369 systemd[1]: Reloading... Jan 23 01:06:36.106355 ldconfig[1228]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:06:36.132689 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:06:36.135211 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:06:36.135997 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:06:36.136728 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:06:36.138924 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:06:36.139679 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Jan 23 01:06:36.139896 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Jan 23 01:06:36.153790 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:06:36.153811 systemd-tmpfiles[1322]: Skipping /boot Jan 23 01:06:36.368290 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:06:36.368384 systemd-tmpfiles[1322]: Skipping /boot Jan 23 01:06:36.422394 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Jan 23 01:06:36.482952 zram_generator::config[1352]: No configuration found. Jan 23 01:06:37.332222 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:06:37.570240 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 01:06:37.582211 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:06:37.619430 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
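sd-merge overlaid the containerd-flatcar, docker-flatcar and kubernetes system extensions onto /usr, after which systemd reloads itself to pick up the units they ship. At runtime the merge state can be listed and redone (a sketch; images live under /etc/extensions or /var/lib/extensions by convention, matching the kubernetes.raw link written during the files stage):

    systemd-sysext list      # known extension images and whether they are merged
    systemd-sysext refresh   # unmerge and re-merge, picking up added or removed images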
Jan 23 01:06:37.620253 systemd[1]: Reloading finished in 1553 ms. Jan 23 01:06:37.648499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:06:37.662030 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:06:37.692693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:06:37.803364 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 01:06:37.803904 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 01:06:37.804346 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 01:06:37.801028 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 01:06:37.832364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:37.840524 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:06:37.870914 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:06:37.883761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:37.889461 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:06:38.054324 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:06:38.076904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:06:38.086334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:38.090470 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:06:38.109274 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:38.112843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:06:38.135848 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:06:38.152292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:06:38.179302 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:06:38.190373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:38.196210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:06:38.196714 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:06:38.211387 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:06:38.217881 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:06:38.266438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:38.266970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:38.271509 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:06:38.286942 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 23 01:06:38.309348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:06:38.324436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:38.324842 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:38.338588 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:06:38.637804 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:38.649786 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:06:38.669545 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:06:38.704750 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:06:38.705430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:06:38.730707 systemd[1]: Finished ensure-sysext.service. Jan 23 01:06:38.742835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:06:38.743467 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:06:38.771034 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:06:38.785240 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:06:38.798894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:06:38.799786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:06:38.807013 augenrules[1482]: No rules Jan 23 01:06:38.822532 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:06:38.824584 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:06:38.838694 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:06:38.840358 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:06:38.931721 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:06:39.603521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:06:39.603768 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:06:39.626349 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 01:06:39.640394 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:06:39.652533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:06:39.687024 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:06:40.085466 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 23 01:06:40.304744 kernel: kvm_amd: TSC scaling supported Jan 23 01:06:40.304843 kernel: kvm_amd: Nested Virtualization enabled Jan 23 01:06:40.304867 kernel: kvm_amd: Nested Paging enabled Jan 23 01:06:40.315593 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 01:06:40.327054 kernel: kvm_amd: PMU virtualization is disabled Jan 23 01:06:40.331956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:40.357002 systemd-networkd[1450]: lo: Link UP Jan 23 01:06:40.357013 systemd-networkd[1450]: lo: Gained carrier Jan 23 01:06:40.366030 systemd-networkd[1450]: Enumeration completed Jan 23 01:06:40.367995 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:06:40.371055 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:40.371235 systemd-networkd[1450]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:06:40.373850 systemd-networkd[1450]: eth0: Link UP Jan 23 01:06:40.374349 systemd-networkd[1450]: eth0: Gained carrier Jan 23 01:06:40.374369 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:40.388272 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:06:40.408611 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:06:40.534924 systemd-networkd[1450]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 01:06:40.576237 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 01:06:40.585275 systemd-resolved[1454]: Positive Trust Anchors: Jan 23 01:06:40.585397 systemd-resolved[1454]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:06:40.585441 systemd-resolved[1454]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:06:40.595871 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:06:41.216504 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 01:06:41.217522 systemd-timesyncd[1498]: Initial clock synchronization to Fri 2026-01-23 01:06:41.216190 UTC. Jan 23 01:06:41.222657 systemd-resolved[1454]: Defaulting to hostname 'linux'. Jan 23 01:06:41.233620 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:06:41.251896 systemd[1]: Reached target network.target - Network. Jan 23 01:06:41.260545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:06:41.273626 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:06:41.289744 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 23 01:06:41.302381 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:06:41.312762 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:06:41.325880 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:06:41.340668 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:06:41.340942 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:06:41.348369 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:06:41.363551 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:06:41.380701 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:06:41.398610 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:06:41.413645 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:06:41.433582 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:06:41.546623 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:06:41.636749 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:06:41.706537 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:06:41.935586 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:06:41.955576 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:06:42.013998 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:06:42.031636 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:06:42.042197 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:06:42.052405 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:06:42.052648 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:06:42.060121 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:06:42.087091 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:06:42.190076 systemd-networkd[1450]: eth0: Gained IPv6LL Jan 23 01:06:42.208205 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:06:42.230412 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:06:42.246535 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:06:42.264200 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:06:42.271572 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:06:42.285104 jq[1519]: false Jan 23 01:06:42.289514 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:06:42.302165 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 01:06:42.317219 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:06:42.331441 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 23 01:06:42.347578 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:06:42.357900 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:06:42.362141 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:06:42.366747 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:06:42.387963 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:06:42.412769 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:06:42.426663 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:06:42.444120 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:06:42.447008 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:06:42.457502 kernel: EDAC MC: Ver: 3.0.0 Jan 23 01:06:42.504882 jq[1531]: true Jan 23 01:06:42.553493 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:06:42.554013 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:06:42.566856 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache Jan 23 01:06:42.568740 oslogin_cache_refresh[1521]: Refreshing passwd entry cache Jan 23 01:06:42.588488 update_engine[1530]: I20260123 01:06:42.588007 1530 main.cc:92] Flatcar Update Engine starting Jan 23 01:06:42.600004 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting Jan 23 01:06:42.600004 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:06:42.600004 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache Jan 23 01:06:42.599186 oslogin_cache_refresh[1521]: Failure getting users, quitting Jan 23 01:06:42.599219 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:06:42.599408 oslogin_cache_refresh[1521]: Refreshing group entry cache Jan 23 01:06:42.605537 extend-filesystems[1520]: Found /dev/vda6 Jan 23 01:06:42.635972 extend-filesystems[1520]: Found /dev/vda9 Jan 23 01:06:42.628048 oslogin_cache_refresh[1521]: Failure getting groups, quitting Jan 23 01:06:42.613388 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:06:42.652623 tar[1538]: linux-amd64/LICENSE Jan 23 01:06:42.652623 tar[1538]: linux-amd64/helm Jan 23 01:06:42.653217 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting Jan 23 01:06:42.653217 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:06:42.653482 extend-filesystems[1520]: Checking size of /dev/vda9 Jan 23 01:06:42.628070 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:06:42.624221 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jan 23 01:06:42.633645 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:06:42.639464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:42.668392 jq[1545]: true Jan 23 01:06:42.669917 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:06:42.683006 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:06:42.683639 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:06:42.694146 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:06:42.695617 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:06:42.788569 dbus-daemon[1517]: [system] SELinux support is enabled Jan 23 01:06:42.789054 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:06:42.798043 extend-filesystems[1520]: Resized partition /dev/vda9 Jan 23 01:06:42.804471 update_engine[1530]: I20260123 01:06:42.801059 1530 update_check_scheduler.cc:74] Next update check in 10m54s Jan 23 01:06:42.822211 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:06:42.822600 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:06:42.833583 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:06:42.833691 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:06:42.927773 extend-filesystems[1579]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:06:42.956738 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:06:43.073567 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:06:43.084437 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:06:43.109066 systemd-logind[1529]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 01:06:43.109189 systemd-logind[1529]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:06:43.111781 systemd-logind[1529]: New seat seat0. Jan 23 01:06:43.114998 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:06:43.135439 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 23 01:06:43.164878 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 01:06:43.165564 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 01:06:43.187670 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:06:43.212027 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:06:43.608661 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:06:43.666440 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 23 01:06:43.717650 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 23 01:06:43.811600 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:06:43.830207 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 01:06:43.830207 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 01:06:43.830207 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 23 01:06:44.158016 extend-filesystems[1520]: Resized filesystem in /dev/vda9 Jan 23 01:06:44.198055 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:06:44.198699 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:06:44.216740 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:06:44.258906 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 01:06:44.337686 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:06:44.338221 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:06:44.626104 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:06:44.754749 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:06:44.755983 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:06:45.198356 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:06:45.212159 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:06:45.227044 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:06:48.352678 containerd[1546]: time="2026-01-23T01:06:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:06:48.361076 containerd[1546]: time="2026-01-23T01:06:48.360947013Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:06:48.462417 containerd[1546]: time="2026-01-23T01:06:48.462029476Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="237.694µs" Jan 23 01:06:48.462417 containerd[1546]: time="2026-01-23T01:06:48.462157534Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:06:48.462417 containerd[1546]: time="2026-01-23T01:06:48.462190206Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:06:48.462619 containerd[1546]: time="2026-01-23T01:06:48.462578150Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:06:48.462619 containerd[1546]: time="2026-01-23T01:06:48.462603267Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:06:48.463101 containerd[1546]: time="2026-01-23T01:06:48.462973007Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:06:48.463101 containerd[1546]: time="2026-01-23T01:06:48.463091348Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:06:48.463179 containerd[1546]: 
time="2026-01-23T01:06:48.463114211Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:06:48.464217 containerd[1546]: time="2026-01-23T01:06:48.464031905Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:06:48.464217 containerd[1546]: time="2026-01-23T01:06:48.464120460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:06:48.464217 containerd[1546]: time="2026-01-23T01:06:48.464144284Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:06:48.464217 containerd[1546]: time="2026-01-23T01:06:48.464160174Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:06:48.464619 containerd[1546]: time="2026-01-23T01:06:48.464508885Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:06:48.465950 containerd[1546]: time="2026-01-23T01:06:48.465672639Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:06:48.465950 containerd[1546]: time="2026-01-23T01:06:48.465884935Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:06:48.465950 containerd[1546]: time="2026-01-23T01:06:48.465910844Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:06:48.466148 containerd[1546]: time="2026-01-23T01:06:48.466027532Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:06:48.468445 containerd[1546]: time="2026-01-23T01:06:48.467713199Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:06:48.468445 containerd[1546]: time="2026-01-23T01:06:48.467957765Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:06:48.536890 containerd[1546]: time="2026-01-23T01:06:48.536741381Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:06:48.537745 containerd[1546]: time="2026-01-23T01:06:48.537717914Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:06:48.538915 containerd[1546]: time="2026-01-23T01:06:48.538742658Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:06:48.538970 containerd[1546]: time="2026-01-23T01:06:48.538924918Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:06:48.539700 containerd[1546]: time="2026-01-23T01:06:48.539091810Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:06:48.540395 containerd[1546]: time="2026-01-23T01:06:48.540180312Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:06:48.540395 containerd[1546]: 
time="2026-01-23T01:06:48.540352504Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:06:48.540395 containerd[1546]: time="2026-01-23T01:06:48.540371048Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:06:48.540395 containerd[1546]: time="2026-01-23T01:06:48.540383722Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:06:48.540395 containerd[1546]: time="2026-01-23T01:06:48.540394602Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:06:48.540780 containerd[1546]: time="2026-01-23T01:06:48.540405293Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:06:48.540780 containerd[1546]: time="2026-01-23T01:06:48.540417906Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:06:48.541994 containerd[1546]: time="2026-01-23T01:06:48.541460814Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:06:48.541994 containerd[1546]: time="2026-01-23T01:06:48.541706873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:06:48.541994 containerd[1546]: time="2026-01-23T01:06:48.541737961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:06:48.541994 containerd[1546]: time="2026-01-23T01:06:48.541753139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:06:48.541994 containerd[1546]: time="2026-01-23T01:06:48.541772405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:06:48.541994 containerd[1546]: time="2026-01-23T01:06:48.541793926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:06:48.542180 containerd[1546]: time="2026-01-23T01:06:48.542041407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:06:48.542180 containerd[1546]: time="2026-01-23T01:06:48.542064902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:06:48.542180 containerd[1546]: time="2026-01-23T01:06:48.542131446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:06:48.542180 containerd[1546]: time="2026-01-23T01:06:48.542150752Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:06:48.542180 containerd[1546]: time="2026-01-23T01:06:48.542165990Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:06:48.543613 containerd[1546]: time="2026-01-23T01:06:48.543331456Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:06:48.543613 containerd[1546]: time="2026-01-23T01:06:48.543529326Z" level=info msg="Start snapshots syncer" Jan 23 01:06:48.543779 containerd[1546]: time="2026-01-23T01:06:48.543742424Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:06:49.106415 containerd[1546]: time="2026-01-23T01:06:49.102656711Z" 
level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:06:49.107578 containerd[1546]: time="2026-01-23T01:06:49.107538186Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:06:49.108473 containerd[1546]: time="2026-01-23T01:06:49.108441604Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:06:49.109637 containerd[1546]: time="2026-01-23T01:06:49.109606158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:06:49.109939 containerd[1546]: time="2026-01-23T01:06:49.109908813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:06:49.110035 containerd[1546]: time="2026-01-23T01:06:49.110010833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:06:49.110198 containerd[1546]: time="2026-01-23T01:06:49.110171013Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:06:49.110494 containerd[1546]: time="2026-01-23T01:06:49.110467587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:06:49.110578 containerd[1546]: time="2026-01-23T01:06:49.110558887Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:06:49.110655 containerd[1546]: time="2026-01-23T01:06:49.110635209Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:06:49.111030 containerd[1546]: time="2026-01-23T01:06:49.111003347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 
01:06:49.112016 containerd[1546]: time="2026-01-23T01:06:49.111989128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:06:49.112112 containerd[1546]: time="2026-01-23T01:06:49.112087091Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:06:49.112664 containerd[1546]: time="2026-01-23T01:06:49.112638250Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:06:49.112766 containerd[1546]: time="2026-01-23T01:06:49.112741863Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:06:49.113104 containerd[1546]: time="2026-01-23T01:06:49.113078802Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:06:49.113189 containerd[1546]: time="2026-01-23T01:06:49.113168490Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:06:49.113425 containerd[1546]: time="2026-01-23T01:06:49.113402347Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:06:49.114389 containerd[1546]: time="2026-01-23T01:06:49.114220034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:06:49.114487 containerd[1546]: time="2026-01-23T01:06:49.114467857Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:06:49.114594 containerd[1546]: time="2026-01-23T01:06:49.114571439Z" level=info msg="runtime interface created" Jan 23 01:06:49.115064 containerd[1546]: time="2026-01-23T01:06:49.115043151Z" level=info msg="created NRI interface" Jan 23 01:06:49.115136 containerd[1546]: time="2026-01-23T01:06:49.115118802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:06:49.115217 containerd[1546]: time="2026-01-23T01:06:49.115200675Z" level=info msg="Connect containerd service" Jan 23 01:06:49.115553 containerd[1546]: time="2026-01-23T01:06:49.115528247Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:06:49.122391 containerd[1546]: time="2026-01-23T01:06:49.122211857Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:06:49.446472 tar[1538]: linux-amd64/README.md Jan 23 01:06:49.522985 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:06:50.936673 containerd[1546]: time="2026-01-23T01:06:50.935904984Z" level=info msg="Start subscribing containerd event" Jan 23 01:06:50.937715 containerd[1546]: time="2026-01-23T01:06:50.936721579Z" level=info msg="Start recovering state" Jan 23 01:06:50.941873 containerd[1546]: time="2026-01-23T01:06:50.941759316Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 23 01:06:50.943892 containerd[1546]: time="2026-01-23T01:06:50.942978873Z" level=info msg="Start event monitor" Jan 23 01:06:50.944502 containerd[1546]: time="2026-01-23T01:06:50.944476590Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:06:50.945199 containerd[1546]: time="2026-01-23T01:06:50.945163933Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:06:50.946404 containerd[1546]: time="2026-01-23T01:06:50.946124517Z" level=info msg="Start streaming server" Jan 23 01:06:50.947743 containerd[1546]: time="2026-01-23T01:06:50.947717763Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:06:50.948637 containerd[1546]: time="2026-01-23T01:06:50.947903208Z" level=info msg="runtime interface starting up..." Jan 23 01:06:50.948637 containerd[1546]: time="2026-01-23T01:06:50.947922936Z" level=info msg="starting plugins..." Jan 23 01:06:50.948637 containerd[1546]: time="2026-01-23T01:06:50.947953332Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:06:50.952561 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:06:50.957461 containerd[1546]: time="2026-01-23T01:06:50.954608800Z" level=info msg="containerd successfully booted in 2.604859s" Jan 23 01:06:51.582907 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:06:51.587101 systemd[1]: Started sshd@0-10.0.0.42:22-10.0.0.1:36956.service - OpenSSH per-connection server daemon (10.0.0.1:36956). Jan 23 01:06:52.765798 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 36956 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:06:53.002959 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:53.119712 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:06:53.122960 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:06:53.147475 systemd-logind[1529]: New session 1 of user core. Jan 23 01:06:53.586958 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:06:53.602484 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:06:53.759107 (systemd)[1658]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:06:53.802395 systemd-logind[1529]: New session c1 of user core. Jan 23 01:06:55.609143 systemd[1658]: Queued start job for default target default.target. Jan 23 01:06:55.710137 systemd[1658]: Created slice app.slice - User Application Slice. Jan 23 01:06:55.710183 systemd[1658]: Reached target paths.target - Paths. Jan 23 01:06:55.711772 systemd[1658]: Reached target timers.target - Timers. Jan 23 01:06:55.729206 systemd[1658]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:06:55.911657 systemd[1658]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:06:55.912108 systemd[1658]: Reached target sockets.target - Sockets. Jan 23 01:06:55.912406 systemd[1658]: Reached target basic.target - Basic System. Jan 23 01:06:55.912462 systemd[1658]: Reached target default.target - Main User Target. Jan 23 01:06:55.912502 systemd[1658]: Startup finished in 2.054s. Jan 23 01:06:55.912994 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:06:56.050448 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 23 01:06:56.427628 systemd[1]: Started sshd@1-10.0.0.42:22-10.0.0.1:57060.service - OpenSSH per-connection server daemon (10.0.0.1:57060). Jan 23 01:06:56.804667 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 57060 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:06:56.810156 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:56.828462 systemd-logind[1529]: New session 2 of user core. Jan 23 01:06:56.920385 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:06:57.862117 sshd[1672]: Connection closed by 10.0.0.1 port 57060 Jan 23 01:06:57.863997 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:57.914012 systemd[1]: sshd@1-10.0.0.42:22-10.0.0.1:57060.service: Deactivated successfully. Jan 23 01:06:58.288448 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:06:58.349002 systemd-logind[1529]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:06:58.391908 systemd[1]: Started sshd@2-10.0.0.42:22-10.0.0.1:57068.service - OpenSSH per-connection server daemon (10.0.0.1:57068). Jan 23 01:06:58.395372 systemd-logind[1529]: Removed session 2. Jan 23 01:06:58.553367 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 57068 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:06:58.556110 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:58.576803 systemd-logind[1529]: New session 3 of user core. Jan 23 01:06:58.609116 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:06:58.889567 sshd[1681]: Connection closed by 10.0.0.1 port 57068 Jan 23 01:06:58.893049 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:58.901081 systemd[1]: sshd@2-10.0.0.42:22-10.0.0.1:57068.service: Deactivated successfully. Jan 23 01:06:58.906790 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:06:58.909710 systemd-logind[1529]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:06:58.916816 systemd-logind[1529]: Removed session 3. Jan 23 01:07:00.564377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:00.620481 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:07:00.627516 systemd[1]: Startup finished in 14.210s (kernel) + 25.445s (initrd) + 33.789s (userspace) = 1min 13.445s. Jan 23 01:07:00.669705 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:07:05.281333 kubelet[1690]: E0123 01:07:05.280786 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:07:05.292101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:07:05.292651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:07:05.294018 systemd[1]: kubelet.service: Consumed 11.126s CPU time, 258.4M memory peak. Jan 23 01:07:08.952183 systemd[1]: Started sshd@3-10.0.0.42:22-10.0.0.1:47998.service - OpenSSH per-connection server daemon (10.0.0.1:47998). 
Jan 23 01:07:09.243039 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 47998 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:07:09.253501 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:09.330836 systemd-logind[1529]: New session 4 of user core. Jan 23 01:07:09.344458 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:07:09.487218 sshd[1704]: Connection closed by 10.0.0.1 port 47998 Jan 23 01:07:09.490132 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:09.522031 systemd[1]: sshd@3-10.0.0.42:22-10.0.0.1:47998.service: Deactivated successfully. Jan 23 01:07:09.538157 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:07:09.543790 systemd-logind[1529]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:07:09.552838 systemd[1]: Started sshd@4-10.0.0.42:22-10.0.0.1:48002.service - OpenSSH per-connection server daemon (10.0.0.1:48002). Jan 23 01:07:09.565385 systemd-logind[1529]: Removed session 4. Jan 23 01:07:09.971088 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 48002 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:07:10.029359 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:10.107170 systemd-logind[1529]: New session 5 of user core. Jan 23 01:07:10.128386 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:07:10.271650 sshd[1713]: Connection closed by 10.0.0.1 port 48002 Jan 23 01:07:10.270668 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:10.326678 systemd[1]: sshd@4-10.0.0.42:22-10.0.0.1:48002.service: Deactivated successfully. Jan 23 01:07:10.349540 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:07:10.359023 systemd-logind[1529]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:07:10.369788 systemd[1]: Started sshd@5-10.0.0.42:22-10.0.0.1:48004.service - OpenSSH per-connection server daemon (10.0.0.1:48004). Jan 23 01:07:10.399502 systemd-logind[1529]: Removed session 5. Jan 23 01:07:10.894500 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 48004 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:07:10.899201 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:10.951127 systemd-logind[1529]: New session 6 of user core. Jan 23 01:07:10.967171 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 01:07:11.247560 sshd[1722]: Connection closed by 10.0.0.1 port 48004 Jan 23 01:07:11.248744 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:11.301463 systemd[1]: sshd@5-10.0.0.42:22-10.0.0.1:48004.service: Deactivated successfully. Jan 23 01:07:11.306453 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:07:11.309184 systemd-logind[1529]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:07:11.321987 systemd[1]: Started sshd@6-10.0.0.42:22-10.0.0.1:48014.service - OpenSSH per-connection server daemon (10.0.0.1:48014). Jan 23 01:07:11.327006 systemd-logind[1529]: Removed session 6. 
Jan 23 01:07:11.552681 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 48014 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:07:11.558678 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:11.629532 systemd-logind[1529]: New session 7 of user core. Jan 23 01:07:11.645769 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:07:11.899664 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:07:11.900595 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:07:11.997207 sudo[1732]: pam_unix(sudo:session): session closed for user root Jan 23 01:07:12.005519 sshd[1731]: Connection closed by 10.0.0.1 port 48014 Jan 23 01:07:12.010586 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:12.043097 systemd[1]: sshd@6-10.0.0.42:22-10.0.0.1:48014.service: Deactivated successfully. Jan 23 01:07:12.054670 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:07:12.066979 systemd-logind[1529]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:07:12.083153 systemd[1]: Started sshd@7-10.0.0.42:22-10.0.0.1:48028.service - OpenSSH per-connection server daemon (10.0.0.1:48028). Jan 23 01:07:12.111496 systemd-logind[1529]: Removed session 7. Jan 23 01:07:12.423680 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 48028 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:07:12.446108 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:12.522118 systemd-logind[1529]: New session 8 of user core. Jan 23 01:07:12.532833 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:07:12.636821 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:07:12.638578 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:07:12.716548 sudo[1743]: pam_unix(sudo:session): session closed for user root Jan 23 01:07:12.733741 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:07:12.735824 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:07:12.801191 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:07:13.201140 augenrules[1765]: No rules Jan 23 01:07:13.210136 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:07:13.217086 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:07:13.226208 sudo[1742]: pam_unix(sudo:session): session closed for user root Jan 23 01:07:13.232557 sshd[1741]: Connection closed by 10.0.0.1 port 48028 Jan 23 01:07:13.236694 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:13.257403 systemd[1]: sshd@7-10.0.0.42:22-10.0.0.1:48028.service: Deactivated successfully. Jan 23 01:07:13.261175 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:07:13.288441 systemd-logind[1529]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:07:13.307028 systemd[1]: Started sshd@8-10.0.0.42:22-10.0.0.1:41534.service - OpenSSH per-connection server daemon (10.0.0.1:41534). Jan 23 01:07:13.316068 systemd-logind[1529]: Removed session 8. 
Jan 23 01:07:13.566072 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 41534 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:07:13.568604 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:13.609740 systemd-logind[1529]: New session 9 of user core. Jan 23 01:07:13.621962 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:07:13.719778 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:07:13.722801 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:07:15.515433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:07:15.536046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:25.967058 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:07:26.101854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:26.131790 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:07:26.307639 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:07:27.691509 kubelet[1807]: E0123 01:07:27.691028 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:07:27.987462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:07:27.988192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:07:27.989905 systemd[1]: kubelet.service: Consumed 7.879s CPU time, 110.8M memory peak. Jan 23 01:07:28.218527 update_engine[1530]: I20260123 01:07:28.201002 1530 update_attempter.cc:509] Updating boot flags... Jan 23 01:07:35.101840 dockerd[1806]: time="2026-01-23T01:07:35.096079628Z" level=info msg="Starting up" Jan 23 01:07:35.110311 dockerd[1806]: time="2026-01-23T01:07:35.110190099Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:07:35.419384 dockerd[1806]: time="2026-01-23T01:07:35.418112103Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:07:36.007169 systemd[1]: var-lib-docker-metacopy\x2dcheck2432585554-merged.mount: Deactivated successfully. Jan 23 01:07:36.309880 dockerd[1806]: time="2026-01-23T01:07:36.306790377Z" level=info msg="Loading containers: start." Jan 23 01:07:36.424536 kernel: Initializing XFRM netlink socket Jan 23 01:07:38.003521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 01:07:38.010750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:39.939846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 01:07:40.005823 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:07:40.035934 systemd-networkd[1450]: docker0: Link UP Jan 23 01:07:40.118401 dockerd[1806]: time="2026-01-23T01:07:40.117886988Z" level=info msg="Loading containers: done." Jan 23 01:07:40.351045 dockerd[1806]: time="2026-01-23T01:07:40.349500459Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:07:40.351045 dockerd[1806]: time="2026-01-23T01:07:40.352085172Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:07:40.355389 dockerd[1806]: time="2026-01-23T01:07:40.354509267Z" level=info msg="Initializing buildkit" Jan 23 01:07:40.619834 kubelet[2015]: E0123 01:07:40.619396 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:07:40.625734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:07:40.625962 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:07:40.631566 systemd[1]: kubelet.service: Consumed 1.583s CPU time, 111.3M memory peak. Jan 23 01:07:40.689078 dockerd[1806]: time="2026-01-23T01:07:40.685956923Z" level=info msg="Completed buildkit initialization" Jan 23 01:07:40.716790 dockerd[1806]: time="2026-01-23T01:07:40.716194619Z" level=info msg="Daemon has completed initialization" Jan 23 01:07:40.717565 dockerd[1806]: time="2026-01-23T01:07:40.717211148Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:07:40.718699 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:07:46.138656 containerd[1546]: time="2026-01-23T01:07:46.137095547Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 01:07:50.707867 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 01:07:50.724127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:50.841982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327932004.mount: Deactivated successfully. Jan 23 01:07:53.289000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:53.342947 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:07:55.448793 kubelet[2096]: E0123 01:07:55.448115 2096 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:07:55.481101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:07:55.482488 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:07:55.484194 systemd[1]: kubelet.service: Consumed 2.569s CPU time, 109.2M memory peak. 
Jan 23 01:08:05.505741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 01:08:05.511913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:08:06.149198 containerd[1546]: time="2026-01-23T01:08:06.148786299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:06.161130 containerd[1546]: time="2026-01-23T01:08:06.160839344Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 23 01:08:06.232557 containerd[1546]: time="2026-01-23T01:08:06.232400505Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:06.299448 containerd[1546]: time="2026-01-23T01:08:06.298474476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:06.306802 containerd[1546]: time="2026-01-23T01:08:06.305399293Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 20.167594956s" Jan 23 01:08:06.306802 containerd[1546]: time="2026-01-23T01:08:06.305556626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 23 01:08:06.415995 containerd[1546]: time="2026-01-23T01:08:06.413687163Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 01:08:08.100902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:08:08.138411 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:08:08.500172 kubelet[2156]: E0123 01:08:08.499475 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:08:08.506555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:08:08.507130 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:08:08.508198 systemd[1]: kubelet.service: Consumed 1.340s CPU time, 110.2M memory peak. 
Jan 23 01:08:11.838931 containerd[1546]: time="2026-01-23T01:08:11.838556133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:11.843493 containerd[1546]: time="2026-01-23T01:08:11.842962107Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 23 01:08:11.846502 containerd[1546]: time="2026-01-23T01:08:11.846411206Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:11.862040 containerd[1546]: time="2026-01-23T01:08:11.861543511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:11.863499 containerd[1546]: time="2026-01-23T01:08:11.863064826Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 5.447565224s" Jan 23 01:08:11.863499 containerd[1546]: time="2026-01-23T01:08:11.863191251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 23 01:08:11.869421 containerd[1546]: time="2026-01-23T01:08:11.869076948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 01:08:14.622900 containerd[1546]: time="2026-01-23T01:08:14.622719136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:14.626645 containerd[1546]: time="2026-01-23T01:08:14.626440175Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 23 01:08:14.629789 containerd[1546]: time="2026-01-23T01:08:14.629694675Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:14.635469 containerd[1546]: time="2026-01-23T01:08:14.635373199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:14.638497 containerd[1546]: time="2026-01-23T01:08:14.637159390Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 2.76795857s" Jan 23 01:08:14.638497 containerd[1546]: time="2026-01-23T01:08:14.637337783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 23 01:08:14.640956 
containerd[1546]: time="2026-01-23T01:08:14.640811651Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 01:08:18.528710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645565214.mount: Deactivated successfully. Jan 23 01:08:18.531549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 01:08:18.534805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:08:19.423109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:08:19.540332 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:08:21.256463 kubelet[2190]: E0123 01:08:21.255834 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:08:21.265547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:08:21.266448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:08:21.268915 systemd[1]: kubelet.service: Consumed 1.924s CPU time, 111.1M memory peak. Jan 23 01:08:22.137582 containerd[1546]: time="2026-01-23T01:08:22.136904149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:22.140134 containerd[1546]: time="2026-01-23T01:08:22.140048710Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 23 01:08:22.144135 containerd[1546]: time="2026-01-23T01:08:22.144059545Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:22.160806 containerd[1546]: time="2026-01-23T01:08:22.158741261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:22.160806 containerd[1546]: time="2026-01-23T01:08:22.160080718Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 7.519177827s" Jan 23 01:08:22.160806 containerd[1546]: time="2026-01-23T01:08:22.160124068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 01:08:22.198488 containerd[1546]: time="2026-01-23T01:08:22.198425804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 01:08:23.651814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284406710.mount: Deactivated successfully. 
Jan 23 01:08:29.638867 containerd[1546]: time="2026-01-23T01:08:29.638689904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:29.641712 containerd[1546]: time="2026-01-23T01:08:29.641526025Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 23 01:08:29.644632 containerd[1546]: time="2026-01-23T01:08:29.644475335Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:29.652271 containerd[1546]: time="2026-01-23T01:08:29.650833472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:29.655369 containerd[1546]: time="2026-01-23T01:08:29.655053931Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 7.456293644s" Jan 23 01:08:29.655369 containerd[1546]: time="2026-01-23T01:08:29.655169616Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 01:08:29.664272 containerd[1546]: time="2026-01-23T01:08:29.664145686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 01:08:30.450751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705300948.mount: Deactivated successfully. 
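For a rough sense of registry throughput, each "Pulled image" entry pairs bytes read with wall time; dividing the two for the pulls so far gives roughly 1 to 3 MiB/s. A back-of-the-envelope sketch using figures copied from the log above:

    package main

    import "fmt"

    func main() {
        pulls := []struct {
            image   string
            bytes   float64 // "bytes read" from the log
            seconds float64 // pull duration from the log
        }{
            {"kube-apiserver:v1.34.3", 27068073, 20.168},
            {"kube-proxy:v1.34.3", 25965293, 7.519},
            {"coredns/coredns:v1.12.1", 22388007, 7.456},
        }
        for _, p := range pulls {
            fmt.Printf("%-26s %.2f MiB/s\n", p.image, p.bytes/p.seconds/(1<<20))
        }
    }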
Jan 23 01:08:30.470962 containerd[1546]: time="2026-01-23T01:08:30.470851556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:30.473991 containerd[1546]: time="2026-01-23T01:08:30.473649504Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 23 01:08:30.477841 containerd[1546]: time="2026-01-23T01:08:30.475872011Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:30.480076 containerd[1546]: time="2026-01-23T01:08:30.479935016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:30.481812 containerd[1546]: time="2026-01-23T01:08:30.481151375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 816.93104ms" Jan 23 01:08:30.482157 containerd[1546]: time="2026-01-23T01:08:30.481941031Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 01:08:30.485546 containerd[1546]: time="2026-01-23T01:08:30.485493373Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 01:08:31.335825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740235283.mount: Deactivated successfully. Jan 23 01:08:31.337913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 23 01:08:31.341432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:08:32.134940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:08:32.179165 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:08:32.726835 kubelet[2275]: E0123 01:08:32.726753 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:08:32.735435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:08:32.735867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:08:32.737177 systemd[1]: kubelet.service: Consumed 1.039s CPU time, 110.3M memory peak. 
Jan 23 01:08:40.682213 containerd[1546]: time="2026-01-23T01:08:40.681952726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:40.689350 containerd[1546]: time="2026-01-23T01:08:40.688088524Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 23 01:08:40.693799 containerd[1546]: time="2026-01-23T01:08:40.692768096Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:40.702167 containerd[1546]: time="2026-01-23T01:08:40.701944961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:08:40.710735 containerd[1546]: time="2026-01-23T01:08:40.710429820Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 10.22489498s" Jan 23 01:08:40.710735 containerd[1546]: time="2026-01-23T01:08:40.710493158Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 01:08:42.757699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 23 01:08:42.765781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:08:43.183154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:08:43.209186 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:08:43.348682 kubelet[2358]: E0123 01:08:43.348162 2358 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:08:43.358685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:08:43.359059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:08:43.359858 systemd[1]: kubelet.service: Consumed 377ms CPU time, 110.5M memory peak. Jan 23 01:08:49.063551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:08:49.065753 systemd[1]: kubelet.service: Consumed 377ms CPU time, 110.5M memory peak. Jan 23 01:08:49.094420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:08:49.159947 systemd[1]: Reload requested from client PID 2374 ('systemctl') (unit session-9.scope)... Jan 23 01:08:49.160025 systemd[1]: Reloading... Jan 23 01:08:49.365507 zram_generator::config[2415]: No configuration found. Jan 23 01:08:50.447111 systemd[1]: Reloading finished in 1286 ms. Jan 23 01:08:50.688739 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:08:50.689023 systemd[1]: kubelet.service: Failed with result 'signal'. 
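Note that the start below (kubelet PID 2466, after the daemon reload) reports only KUBELET_EXTRA_ARGS as unset, where earlier starts also listed KUBELET_KUBEADM_ARGS: between the reload and this restart, kubeadm has evidently written its kubelet environment file, and the deprecated --pod-infra-container-image and --volume-plugin-dir flags warned about below come from there. A sketch of parsing one such EnvironmentFile-style line; the sample value is hypothetical:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Hypothetical contents; the real file is generated by kubeadm.
        line := `KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"`
        key, val, ok := strings.Cut(line, "=")
        if ok {
            fmt.Printf("%s -> %s\n", key, strings.Trim(val, `"`))
        }
    }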
Jan 23 01:08:50.690000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:08:50.690183 systemd[1]: kubelet.service: Consumed 288ms CPU time, 98.3M memory peak. Jan 23 01:08:50.696523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:08:51.172594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:08:51.195688 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:08:51.361666 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:08:51.361666 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:08:51.362470 kubelet[2466]: I0123 01:08:51.362152 2466 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:08:51.910191 kubelet[2466]: I0123 01:08:51.909956 2466 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 01:08:51.910191 kubelet[2466]: I0123 01:08:51.910050 2466 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:08:51.910191 kubelet[2466]: I0123 01:08:51.910091 2466 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 01:08:51.910191 kubelet[2466]: I0123 01:08:51.910108 2466 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:08:51.910853 kubelet[2466]: I0123 01:08:51.910756 2466 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:08:51.995616 kubelet[2466]: E0123 01:08:51.995465 2466 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:08:51.998006 kubelet[2466]: I0123 01:08:51.997932 2466 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:08:52.019207 kubelet[2466]: I0123 01:08:52.018930 2466 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:08:52.039083 kubelet[2466]: I0123 01:08:52.038865 2466 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 01:08:52.040080 kubelet[2466]: I0123 01:08:52.039859 2466 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:08:52.042137 kubelet[2466]: I0123 01:08:52.039965 2466 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:08:52.042137 kubelet[2466]: I0123 01:08:52.041653 2466 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:08:52.042137 kubelet[2466]: I0123 01:08:52.042070 2466 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 01:08:52.042623 kubelet[2466]: I0123 01:08:52.042549 2466 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 01:08:52.053451 kubelet[2466]: I0123 01:08:52.053130 2466 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:08:52.058360 kubelet[2466]: I0123 01:08:52.057818 2466 kubelet.go:475] "Attempting to sync node with API server" Jan 23 01:08:52.058360 kubelet[2466]: I0123 01:08:52.057891 2466 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:08:52.058360 kubelet[2466]: I0123 01:08:52.058121 2466 kubelet.go:387] "Adding apiserver pod source" Jan 23 01:08:52.058360 kubelet[2466]: I0123 01:08:52.058332 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:08:52.060184 kubelet[2466]: E0123 01:08:52.060044 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:08:52.063715 kubelet[2466]: E0123 01:08:52.062763 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:08:52.076675 kubelet[2466]: I0123 01:08:52.076532 2466 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:08:52.079153 kubelet[2466]: I0123 01:08:52.078895 2466 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:08:52.080002 kubelet[2466]: I0123 01:08:52.079858 2466 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 01:08:52.080963 kubelet[2466]: W0123 01:08:52.080819 2466 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:08:52.098819 kubelet[2466]: I0123 01:08:52.098699 2466 server.go:1262] "Started kubelet" Jan 23 01:08:52.104030 kubelet[2466]: I0123 01:08:52.100641 2466 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:08:52.104030 kubelet[2466]: I0123 01:08:52.102683 2466 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 01:08:52.104030 kubelet[2466]: I0123 01:08:52.103912 2466 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:08:52.104473 kubelet[2466]: I0123 01:08:52.104053 2466 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:08:52.115591 kubelet[2466]: I0123 01:08:52.114585 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:08:52.116537 kubelet[2466]: I0123 01:08:52.116506 2466 server.go:310] "Adding debug handlers to kubelet server" Jan 23 01:08:52.126465 kubelet[2466]: I0123 01:08:52.124740 2466 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:08:52.135077 kubelet[2466]: E0123 01:08:52.127952 2466 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d36d87f08f40d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 01:08:52.098601997 +0000 UTC m=+0.887712484,LastTimestamp:2026-01-23 01:08:52.098601997 +0000 UTC m=+0.887712484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 01:08:52.136136 kubelet[2466]: I0123 01:08:52.135969 2466 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 01:08:52.136631 kubelet[2466]: I0123 01:08:52.135828 2466 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 01:08:52.138470 kubelet[2466]: E0123 01:08:52.136689 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:08:52.148739 kubelet[2466]: I0123 01:08:52.147514 2466 reconciler.go:29] "Reconciler: start to sync state" 
Jan 23 01:08:52.148739 kubelet[2466]: E0123 01:08:52.148037 2466 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="200ms" Jan 23 01:08:52.149563 kubelet[2466]: I0123 01:08:52.149332 2466 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:08:52.153525 kubelet[2466]: E0123 01:08:52.152051 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:08:52.153525 kubelet[2466]: I0123 01:08:52.152146 2466 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:08:52.153525 kubelet[2466]: E0123 01:08:52.152998 2466 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:08:52.158702 kubelet[2466]: I0123 01:08:52.158533 2466 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:08:52.220829 kubelet[2466]: I0123 01:08:52.219892 2466 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:08:52.220829 kubelet[2466]: I0123 01:08:52.219923 2466 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:08:52.220829 kubelet[2466]: I0123 01:08:52.219946 2466 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:08:52.231081 kubelet[2466]: I0123 01:08:52.230535 2466 policy_none.go:49] "None policy: Start" Jan 23 01:08:52.231081 kubelet[2466]: I0123 01:08:52.230759 2466 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 01:08:52.231081 kubelet[2466]: I0123 01:08:52.230837 2466 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 01:08:52.240147 kubelet[2466]: I0123 01:08:52.240007 2466 policy_none.go:47] "Start" Jan 23 01:08:52.245568 kubelet[2466]: E0123 01:08:52.242892 2466 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:08:52.253730 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:08:52.257554 kubelet[2466]: I0123 01:08:52.257459 2466 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 01:08:52.262869 kubelet[2466]: I0123 01:08:52.262665 2466 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 01:08:52.263470 kubelet[2466]: I0123 01:08:52.262920 2466 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 01:08:52.263470 kubelet[2466]: I0123 01:08:52.263035 2466 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 01:08:52.264361 kubelet[2466]: E0123 01:08:52.264190 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:08:52.266521 kubelet[2466]: E0123 01:08:52.263107 2466 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:08:52.285459 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:08:52.296083 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:08:52.308432 kubelet[2466]: E0123 01:08:52.307439 2466 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:08:52.308432 kubelet[2466]: I0123 01:08:52.307886 2466 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:08:52.308432 kubelet[2466]: I0123 01:08:52.307955 2466 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:08:52.309744 kubelet[2466]: I0123 01:08:52.308688 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:08:52.311632 kubelet[2466]: E0123 01:08:52.311578 2466 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:08:52.311767 kubelet[2466]: E0123 01:08:52.311637 2466 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:08:52.350553 kubelet[2466]: E0123 01:08:52.350185 2466 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="400ms" Jan 23 01:08:52.395811 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 23 01:08:52.411587 kubelet[2466]: I0123 01:08:52.411139 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:08:52.413101 kubelet[2466]: E0123 01:08:52.412521 2466 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" Jan 23 01:08:52.416962 kubelet[2466]: E0123 01:08:52.416760 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:08:52.426340 systemd[1]: Created slice kubepods-burstable-pod45760cc25a363525bdd2693f83bfd246.slice - libcontainer container kubepods-burstable-pod45760cc25a363525bdd2693f83bfd246.slice. 
Jan 23 01:08:52.433216 kubelet[2466]: E0123 01:08:52.433089 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:08:52.440766 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 23 01:08:52.445043 kubelet[2466]: E0123 01:08:52.444714 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:08:52.452948 kubelet[2466]: I0123 01:08:52.452814 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:08:52.452948 kubelet[2466]: I0123 01:08:52.452920 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45760cc25a363525bdd2693f83bfd246-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"45760cc25a363525bdd2693f83bfd246\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:08:52.452948 kubelet[2466]: I0123 01:08:52.452948 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45760cc25a363525bdd2693f83bfd246-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"45760cc25a363525bdd2693f83bfd246\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:08:52.453152 kubelet[2466]: I0123 01:08:52.453035 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:08:52.453152 kubelet[2466]: I0123 01:08:52.453066 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:08:52.453152 kubelet[2466]: I0123 01:08:52.453107 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:08:52.453678 kubelet[2466]: I0123 01:08:52.453351 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 23 01:08:52.453678 kubelet[2466]: I0123 01:08:52.453458 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45760cc25a363525bdd2693f83bfd246-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"45760cc25a363525bdd2693f83bfd246\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:08:52.453678 kubelet[2466]: I0123 01:08:52.453488 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:08:52.616156 kubelet[2466]: I0123 01:08:52.615583 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:08:52.616485 kubelet[2466]: E0123 01:08:52.616347 2466 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" Jan 23 01:08:52.733033 kubelet[2466]: E0123 01:08:52.730870 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:52.737969 containerd[1546]: time="2026-01-23T01:08:52.737803283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 23 01:08:52.744843 kubelet[2466]: E0123 01:08:52.744519 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:52.745577 containerd[1546]: time="2026-01-23T01:08:52.745471616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:45760cc25a363525bdd2693f83bfd246,Namespace:kube-system,Attempt:0,}" Jan 23 01:08:52.756100 kubelet[2466]: E0123 01:08:52.756063 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:52.756950 containerd[1546]: time="2026-01-23T01:08:52.756901655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 23 01:08:52.784884 kubelet[2466]: E0123 01:08:52.784658 2466 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="800ms" Jan 23 01:08:53.019090 kubelet[2466]: I0123 01:08:53.018903 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:08:53.019685 kubelet[2466]: E0123 01:08:53.019577 2466 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" Jan 23 01:08:53.040906 kubelet[2466]: E0123 01:08:53.040809 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:08:53.127302 kubelet[2466]: E0123 01:08:53.127102 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:08:53.519200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173228298.mount: Deactivated successfully. Jan 23 01:08:53.556672 containerd[1546]: time="2026-01-23T01:08:53.556428547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:08:53.565658 containerd[1546]: time="2026-01-23T01:08:53.564516821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 01:08:53.571596 containerd[1546]: time="2026-01-23T01:08:53.570906183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:08:53.579602 containerd[1546]: time="2026-01-23T01:08:53.578907211Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:08:53.581877 containerd[1546]: time="2026-01-23T01:08:53.581337034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 01:08:53.586456 kubelet[2466]: E0123 01:08:53.585936 2466 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="1.6s" Jan 23 01:08:53.589617 containerd[1546]: time="2026-01-23T01:08:53.588984032Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:08:53.591076 containerd[1546]: time="2026-01-23T01:08:53.590824522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:08:53.592356 containerd[1546]: time="2026-01-23T01:08:53.591908692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 837.72515ms" Jan 23 01:08:53.592827 containerd[1546]: time="2026-01-23T01:08:53.592775096Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 01:08:53.597621 containerd[1546]: time="2026-01-23T01:08:53.597460944Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 833.972808ms" Jan 23 01:08:53.599545 containerd[1546]: time="2026-01-23T01:08:53.598904339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 840.873897ms" Jan 23 01:08:53.620753 kubelet[2466]: E0123 01:08:53.620688 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:08:53.711044 containerd[1546]: time="2026-01-23T01:08:53.710641481Z" level=info msg="connecting to shim 086073ed754cb6500dbd81b2ba6c01cf9abc8d7088fb76b2250c6f8c5130a195" address="unix:///run/containerd/s/2e862137533a0b304582ad696fd012c412bc88ca2c8caedd7cee989fdd9356d7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:08:53.714358 containerd[1546]: time="2026-01-23T01:08:53.714315086Z" level=info msg="connecting to shim aede2888962bfd7871a6d9bce88e5d95bf111ffec54c2d4f838254e1e19d4cff" address="unix:///run/containerd/s/014976d2f7120a6c00f1439cf653903b40d1c81d0072f5705090a18759efc8e7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:08:53.714967 containerd[1546]: time="2026-01-23T01:08:53.714822820Z" level=info msg="connecting to shim 6bac153a415ff5a55b88ef25029b82b52e2dfcab536997a00c03aeb0bf8794c6" address="unix:///run/containerd/s/dcff7dd5bb8fb1ea0008c2ac4728d604933d42190cdf76fb64a1ccf357fd6fa2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:08:53.775884 systemd[1]: Started cri-containerd-086073ed754cb6500dbd81b2ba6c01cf9abc8d7088fb76b2250c6f8c5130a195.scope - libcontainer container 086073ed754cb6500dbd81b2ba6c01cf9abc8d7088fb76b2250c6f8c5130a195. Jan 23 01:08:53.777892 kubelet[2466]: E0123 01:08:53.776057 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:08:53.808602 systemd[1]: Started cri-containerd-aede2888962bfd7871a6d9bce88e5d95bf111ffec54c2d4f838254e1e19d4cff.scope - libcontainer container aede2888962bfd7871a6d9bce88e5d95bf111ffec54c2d4f838254e1e19d4cff. Jan 23 01:08:53.817836 systemd[1]: Started cri-containerd-6bac153a415ff5a55b88ef25029b82b52e2dfcab536997a00c03aeb0bf8794c6.scope - libcontainer container 6bac153a415ff5a55b88ef25029b82b52e2dfcab536997a00c03aeb0bf8794c6. 
Jan 23 01:08:53.822934 kubelet[2466]: I0123 01:08:53.822648 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:08:53.826347 kubelet[2466]: E0123 01:08:53.825935 2466 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" Jan 23 01:08:53.975556 containerd[1546]: time="2026-01-23T01:08:53.975098777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"aede2888962bfd7871a6d9bce88e5d95bf111ffec54c2d4f838254e1e19d4cff\"" Jan 23 01:08:53.980724 kubelet[2466]: E0123 01:08:53.980572 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:53.982583 containerd[1546]: time="2026-01-23T01:08:53.981998703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:45760cc25a363525bdd2693f83bfd246,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bac153a415ff5a55b88ef25029b82b52e2dfcab536997a00c03aeb0bf8794c6\"" Jan 23 01:08:53.984791 kubelet[2466]: E0123 01:08:53.984630 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:53.996849 containerd[1546]: time="2026-01-23T01:08:53.991201741Z" level=info msg="CreateContainer within sandbox \"aede2888962bfd7871a6d9bce88e5d95bf111ffec54c2d4f838254e1e19d4cff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:08:53.996849 containerd[1546]: time="2026-01-23T01:08:53.996485077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"086073ed754cb6500dbd81b2ba6c01cf9abc8d7088fb76b2250c6f8c5130a195\"" Jan 23 01:08:53.999518 kubelet[2466]: E0123 01:08:53.998475 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:54.000722 containerd[1546]: time="2026-01-23T01:08:54.000205289Z" level=info msg="CreateContainer within sandbox \"6bac153a415ff5a55b88ef25029b82b52e2dfcab536997a00c03aeb0bf8794c6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:08:54.016851 containerd[1546]: time="2026-01-23T01:08:54.016531945Z" level=info msg="CreateContainer within sandbox \"086073ed754cb6500dbd81b2ba6c01cf9abc8d7088fb76b2250c6f8c5130a195\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:08:54.051639 containerd[1546]: time="2026-01-23T01:08:54.049222322Z" level=info msg="Container 54ff2fe23986bb6e5d1a1188fcd6d34c3d67d01ff613e2423baeda10ca8e8b09: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:08:54.058808 containerd[1546]: time="2026-01-23T01:08:54.058618522Z" level=info msg="Container b3b74848d01d001f5f811e2fe5f94fcb2b97cdb05d836e7084d8d6083686dbb0: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:08:54.070732 containerd[1546]: time="2026-01-23T01:08:54.070432546Z" level=info msg="Container 743be9c9ad518476c537f4ea17e5ff61921d118ad3017a3040af09b78af7868f: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:08:54.084874 
containerd[1546]: time="2026-01-23T01:08:54.084730313Z" level=info msg="CreateContainer within sandbox \"aede2888962bfd7871a6d9bce88e5d95bf111ffec54c2d4f838254e1e19d4cff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"54ff2fe23986bb6e5d1a1188fcd6d34c3d67d01ff613e2423baeda10ca8e8b09\"" Jan 23 01:08:54.086791 containerd[1546]: time="2026-01-23T01:08:54.086749140Z" level=info msg="StartContainer for \"54ff2fe23986bb6e5d1a1188fcd6d34c3d67d01ff613e2423baeda10ca8e8b09\"" Jan 23 01:08:54.091179 containerd[1546]: time="2026-01-23T01:08:54.090928469Z" level=info msg="connecting to shim 54ff2fe23986bb6e5d1a1188fcd6d34c3d67d01ff613e2423baeda10ca8e8b09" address="unix:///run/containerd/s/014976d2f7120a6c00f1439cf653903b40d1c81d0072f5705090a18759efc8e7" protocol=ttrpc version=3 Jan 23 01:08:54.093287 containerd[1546]: time="2026-01-23T01:08:54.092956102Z" level=info msg="CreateContainer within sandbox \"6bac153a415ff5a55b88ef25029b82b52e2dfcab536997a00c03aeb0bf8794c6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3b74848d01d001f5f811e2fe5f94fcb2b97cdb05d836e7084d8d6083686dbb0\"" Jan 23 01:08:54.094130 containerd[1546]: time="2026-01-23T01:08:54.094036549Z" level=info msg="StartContainer for \"b3b74848d01d001f5f811e2fe5f94fcb2b97cdb05d836e7084d8d6083686dbb0\"" Jan 23 01:08:54.096062 containerd[1546]: time="2026-01-23T01:08:54.095966540Z" level=info msg="connecting to shim b3b74848d01d001f5f811e2fe5f94fcb2b97cdb05d836e7084d8d6083686dbb0" address="unix:///run/containerd/s/dcff7dd5bb8fb1ea0008c2ac4728d604933d42190cdf76fb64a1ccf357fd6fa2" protocol=ttrpc version=3 Jan 23 01:08:54.104363 containerd[1546]: time="2026-01-23T01:08:54.103955958Z" level=info msg="CreateContainer within sandbox \"086073ed754cb6500dbd81b2ba6c01cf9abc8d7088fb76b2250c6f8c5130a195\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"743be9c9ad518476c537f4ea17e5ff61921d118ad3017a3040af09b78af7868f\"" Jan 23 01:08:54.105540 containerd[1546]: time="2026-01-23T01:08:54.105088341Z" level=info msg="StartContainer for \"743be9c9ad518476c537f4ea17e5ff61921d118ad3017a3040af09b78af7868f\"" Jan 23 01:08:54.112990 containerd[1546]: time="2026-01-23T01:08:54.112921839Z" level=info msg="connecting to shim 743be9c9ad518476c537f4ea17e5ff61921d118ad3017a3040af09b78af7868f" address="unix:///run/containerd/s/2e862137533a0b304582ad696fd012c412bc88ca2c8caedd7cee989fdd9356d7" protocol=ttrpc version=3 Jan 23 01:08:54.138813 systemd[1]: Started cri-containerd-54ff2fe23986bb6e5d1a1188fcd6d34c3d67d01ff613e2423baeda10ca8e8b09.scope - libcontainer container 54ff2fe23986bb6e5d1a1188fcd6d34c3d67d01ff613e2423baeda10ca8e8b09. Jan 23 01:08:54.140698 kubelet[2466]: E0123 01:08:54.140646 2466 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:08:54.150906 systemd[1]: Started cri-containerd-b3b74848d01d001f5f811e2fe5f94fcb2b97cdb05d836e7084d8d6083686dbb0.scope - libcontainer container b3b74848d01d001f5f811e2fe5f94fcb2b97cdb05d836e7084d8d6083686dbb0. 
Jan 23 01:08:54.183794 systemd[1]: Started cri-containerd-743be9c9ad518476c537f4ea17e5ff61921d118ad3017a3040af09b78af7868f.scope - libcontainer container 743be9c9ad518476c537f4ea17e5ff61921d118ad3017a3040af09b78af7868f. Jan 23 01:08:54.332082 containerd[1546]: time="2026-01-23T01:08:54.330853350Z" level=info msg="StartContainer for \"b3b74848d01d001f5f811e2fe5f94fcb2b97cdb05d836e7084d8d6083686dbb0\" returns successfully" Jan 23 01:08:54.369555 containerd[1546]: time="2026-01-23T01:08:54.367897228Z" level=info msg="StartContainer for \"54ff2fe23986bb6e5d1a1188fcd6d34c3d67d01ff613e2423baeda10ca8e8b09\" returns successfully" Jan 23 01:08:54.379098 containerd[1546]: time="2026-01-23T01:08:54.378984762Z" level=info msg="StartContainer for \"743be9c9ad518476c537f4ea17e5ff61921d118ad3017a3040af09b78af7868f\" returns successfully" Jan 23 01:08:55.368631 kubelet[2466]: E0123 01:08:55.368520 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:08:55.369507 kubelet[2466]: E0123 01:08:55.368707 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:55.385660 kubelet[2466]: E0123 01:08:55.385543 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:08:55.385826 kubelet[2466]: E0123 01:08:55.385807 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:55.386486 kubelet[2466]: E0123 01:08:55.386202 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:08:55.386805 kubelet[2466]: E0123 01:08:55.386708 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:08:55.431647 kubelet[2466]: I0123 01:08:55.431534 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:09:05.674179 kubelet[2466]: E0123 01:09:05.669806 2466 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 23 01:09:05.692641 kubelet[2466]: E0123 01:09:05.692546 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:09:05.694606 kubelet[2466]: E0123 01:09:05.694577 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:05.696118 kubelet[2466]: E0123 01:09:05.695687 2466 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 23 01:09:05.697361 kubelet[2466]: E0123 01:09:05.697138 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS 
handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:09:05.698233 kubelet[2466]: E0123 01:09:05.698110 2466 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:09:05.724360 kubelet[2466]: E0123 01:09:05.723931 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:09:05.724697 kubelet[2466]: E0123 01:09:05.724671 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:05.730369 kubelet[2466]: E0123 01:09:05.727069 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:09:05.730928 kubelet[2466]: E0123 01:09:05.730899 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:05.927970 kubelet[2466]: E0123 01:09:05.927806 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:09:05.970926 kubelet[2466]: E0123 01:09:05.969393 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:09:05.989626 kubelet[2466]: E0123 01:09:05.989534 2466 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:09:07.434539 kubelet[2466]: E0123 01:09:07.434502 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:09:07.436703 kubelet[2466]: E0123 01:09:07.436676 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:07.438913 kubelet[2466]: E0123 01:09:07.438668 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:09:07.442111 kubelet[2466]: E0123 01:09:07.441876 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:07.449757 kubelet[2466]: E0123 01:09:07.449533 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:09:07.449757 kubelet[2466]: E0123 01:09:07.449671 2466 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:08.517046 kubelet[2466]: E0123 01:09:08.516918 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:09:08.519069 kubelet[2466]: E0123 01:09:08.517750 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:08.922537 kubelet[2466]: I0123 01:09:08.921037 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:09:13.235850 kubelet[2466]: E0123 01:09:13.234570 2466 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.42:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188d36d87f08f40d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 01:08:52.098601997 +0000 UTC m=+0.887712484,LastTimestamp:2026-01-23 01:08:52.098601997 +0000 UTC m=+0.887712484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 01:09:15.706369 kubelet[2466]: E0123 01:09:15.705043 2466 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:09:15.711793 kubelet[2466]: E0123 01:09:15.709103 2466 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:09:16.109355 kubelet[2466]: E0123 01:09:16.106902 2466 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:09:16.131462 kubelet[2466]: E0123 01:09:16.121998 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:20.165824 kubelet[2466]: I0123 01:09:20.063467 2466 apiserver.go:52] "Watching apiserver" Jan 23 01:09:25.916067 kubelet[2466]: E0123 01:09:25.906870 2466 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:09:27.611049 kubelet[2466]: I0123 01:09:27.610190 2466 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 01:09:27.650785 kubelet[2466]: E0123 01:09:27.649990 2466 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="Get \"https://10.0.0.42:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" node="localhost" Jan 23 01:09:28.727947 kubelet[2466]: E0123 01:09:28.727111 2466 kubelet_node_status.go:113] "Unable to register node with API server, 
error getting existing node" err="nodes \"localhost\" not found" node="localhost" Jan 23 01:09:28.895910 kubelet[2466]: E0123 01:09:28.874786 2466 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d36d87f08f40d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 01:08:52.098601997 +0000 UTC m=+0.887712484,LastTimestamp:2026-01-23 01:08:52.098601997 +0000 UTC m=+0.887712484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 01:09:29.478496 kubelet[2466]: E0123 01:09:29.477953 2466 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 23 01:09:30.056135 kubelet[2466]: E0123 01:09:30.052016 2466 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 23 01:09:30.660218 kubelet[2466]: E0123 01:09:30.659758 2466 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 23 01:09:31.703132 kubelet[2466]: E0123 01:09:31.702775 2466 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 23 01:09:35.253593 kubelet[2466]: I0123 01:09:35.251008 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:09:35.336354 kubelet[2466]: I0123 01:09:35.335821 2466 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 01:09:35.341075 kubelet[2466]: I0123 01:09:35.340535 2466 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 01:09:35.434703 kubelet[2466]: I0123 01:09:35.434657 2466 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 01:09:35.438782 kubelet[2466]: E0123 01:09:35.438416 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:35.471831 kubelet[2466]: I0123 01:09:35.471782 2466 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:09:35.471831 kubelet[2466]: E0123 01:09:35.490162 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:35.515704 kubelet[2466]: E0123 01:09:35.514924 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:35.923661 kubelet[2466]: I0123 01:09:35.914638 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.914347776 podStartE2EDuration="914.347776ms" podCreationTimestamp="2026-01-23 01:09:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:09:35.907616064 +0000 UTC m=+44.696726541" watchObservedRunningTime="2026-01-23 01:09:35.914347776 +0000 UTC m=+44.703458254" Jan 23 01:09:35.968496 kubelet[2466]: I0123 01:09:35.968035 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.968012912 podStartE2EDuration="968.012912ms" podCreationTimestamp="2026-01-23 01:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:09:35.967558035 +0000 UTC m=+44.756668522" watchObservedRunningTime="2026-01-23 01:09:35.968012912 +0000 UTC m=+44.757123429" Jan 23 01:09:36.017714 kubelet[2466]: I0123 01:09:36.017551 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.017528486 podStartE2EDuration="1.017528486s" podCreationTimestamp="2026-01-23 01:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:09:36.017221734 +0000 UTC m=+44.806332231" watchObservedRunningTime="2026-01-23 01:09:36.017528486 +0000 UTC m=+44.806638964" Jan 23 01:09:36.307399 kubelet[2466]: E0123 01:09:36.304854 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:36.494867 systemd[1]: Reload requested from client PID 2757 ('systemctl') (unit session-9.scope)... Jan 23 01:09:36.496090 systemd[1]: Reloading... Jan 23 01:09:36.829362 zram_generator::config[2797]: No configuration found. Jan 23 01:09:37.308642 kubelet[2466]: E0123 01:09:37.307773 2466 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:09:37.952910 systemd[1]: Reloading finished in 1455 ms. Jan 23 01:09:38.143627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:09:38.148933 kubelet[2466]: I0123 01:09:38.146201 2466 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:09:38.258000 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:09:38.343797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:09:38.344424 systemd[1]: kubelet.service: Consumed 12.507s CPU time, 128.2M memory peak. Jan 23 01:09:38.487440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:09:42.321744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:09:42.369577 (kubelet)[2845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:09:43.420200 kubelet[2845]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:09:43.420200 kubelet[2845]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 01:09:43.434086 kubelet[2845]: I0123 01:09:43.423112 2845 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 01:09:43.518313 kubelet[2845]: I0123 01:09:43.518186 2845 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 01:09:43.518911 kubelet[2845]: I0123 01:09:43.518383 2845 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 01:09:43.518911 kubelet[2845]: I0123 01:09:43.518428 2845 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 01:09:43.518911 kubelet[2845]: I0123 01:09:43.518597 2845 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 01:09:43.519738 kubelet[2845]: I0123 01:09:43.519708 2845 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 01:09:43.526077 kubelet[2845]: I0123 01:09:43.525990 2845 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 23 01:09:43.548645 kubelet[2845]: I0123 01:09:43.548091 2845 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 01:09:43.647837 kubelet[2845]: I0123 01:09:43.647463 2845 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 01:09:43.661330 kubelet[2845]: I0123 01:09:43.660920 2845 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 23 01:09:43.661814 kubelet[2845]: I0123 01:09:43.661721 2845 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 01:09:43.662995 kubelet[2845]: I0123 01:09:43.661771 2845 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 01:09:43.662995 kubelet[2845]: I0123 01:09:43.662821 2845 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 01:09:43.662995 kubelet[2845]: I0123 01:09:43.662839 2845 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 01:09:43.662995 kubelet[2845]: I0123 01:09:43.662989 2845 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 01:09:43.668891 kubelet[2845]: I0123 01:09:43.667051 2845 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:09:43.677447 kubelet[2845]: I0123 01:09:43.669123 2845 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 01:09:43.677447 kubelet[2845]: I0123 01:09:43.669137 2845 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 01:09:43.677447 kubelet[2845]: I0123 01:09:43.669160 2845 kubelet.go:387] "Adding apiserver pod source"
Jan 23 01:09:43.677447 kubelet[2845]: I0123 01:09:43.669181 2845 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 01:09:43.685603 kubelet[2845]: I0123 01:09:43.680010 2845 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 01:09:43.787401 kubelet[2845]: I0123 01:09:43.786618 2845 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 01:09:43.787401 kubelet[2845]: I0123 01:09:43.787145 2845 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 01:09:44.045968 kubelet[2845]: I0123 01:09:44.045432 2845 server.go:1262] "Started kubelet"
Jan 23 01:09:44.060191 kubelet[2845]: I0123 01:09:44.052413 2845 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 01:09:44.103376 kubelet[2845]: I0123 01:09:44.101858 2845 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 01:09:44.103376 kubelet[2845]: I0123 01:09:44.103125 2845 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 23 01:09:44.104796 kubelet[2845]: I0123 01:09:44.103830 2845 server.go:310] "Adding debug handlers to kubelet server"
Jan 23 01:09:44.106724 kubelet[2845]: I0123 01:09:44.106050 2845 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 01:09:44.147048 kubelet[2845]: I0123 01:09:44.144432 2845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 01:09:44.153362 kubelet[2845]: I0123 01:09:44.152214 2845 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 01:09:44.337211 kubelet[2845]: I0123 01:09:44.336612 2845 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 23 01:09:44.341493 kubelet[2845]: I0123 01:09:44.341056 2845 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 01:09:44.351699 kubelet[2845]: I0123 01:09:44.351481 2845 reconciler.go:29] "Reconciler: start to sync state"
Jan 23 01:09:44.401151 kubelet[2845]: E0123 01:09:44.400796 2845 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 01:09:44.401904 kubelet[2845]: I0123 01:09:44.401698 2845 factory.go:223] Registration of the containerd container factory successfully
Jan 23 01:09:44.401904 kubelet[2845]: I0123 01:09:44.401728 2845 factory.go:223] Registration of the systemd container factory successfully
Jan 23 01:09:44.405036 kubelet[2845]: I0123 01:09:44.404986 2845 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 01:09:44.432624 kubelet[2845]: I0123 01:09:44.432461 2845 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 23 01:09:44.541603 kubelet[2845]: I0123 01:09:44.536439 2845 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 23 01:09:44.541603 kubelet[2845]: I0123 01:09:44.536595 2845 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 23 01:09:44.541603 kubelet[2845]: I0123 01:09:44.536633 2845 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 23 01:09:44.541603 kubelet[2845]: E0123 01:09:44.536758 2845 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 01:09:44.735908 kubelet[2845]: E0123 01:09:44.658205 2845 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 01:09:44.735908 kubelet[2845]: I0123 01:09:44.734379 2845 apiserver.go:52] "Watching apiserver"
Jan 23 01:09:44.948209 kubelet[2845]: E0123 01:09:44.945828 2845 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 01:09:45.444940 kubelet[2845]: E0123 01:09:45.443487 2845 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 01:09:45.519057 kubelet[2845]: I0123 01:09:45.518028 2845 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 01:09:45.519057 kubelet[2845]: I0123 01:09:45.518126 2845 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 01:09:45.519057 kubelet[2845]: I0123 01:09:45.518212 2845 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:09:45.525203 kubelet[2845]: I0123 01:09:45.520484 2845 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 01:09:45.525203 kubelet[2845]: I0123 01:09:45.520585 2845 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 01:09:45.525203 kubelet[2845]: I0123 01:09:45.520712 2845 policy_none.go:49] "None policy: Start"
Jan 23 01:09:45.525203 kubelet[2845]: I0123 01:09:45.520811 2845 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 23 01:09:45.525203 kubelet[2845]: I0123 01:09:45.520880 2845 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 23 01:09:45.525203 kubelet[2845]: I0123 01:09:45.521602 2845 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 23 01:09:45.525203 kubelet[2845]: I0123 01:09:45.521712 2845 policy_none.go:47] "Start"
Jan 23 01:09:45.596488 kubelet[2845]: E0123 01:09:45.595479 2845 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 01:09:45.596488 kubelet[2845]: I0123 01:09:45.596496 2845 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 01:09:45.596788 kubelet[2845]: I0123 01:09:45.596626 2845 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 01:09:45.607660 kubelet[2845]: I0123 01:09:45.606668 2845 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 01:09:46.521593 kubelet[2845]: E0123 01:09:46.521481 2845 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 01:09:46.525372 kubelet[2845]: I0123 01:09:46.525123 2845 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 01:09:46.545036 containerd[1546]: time="2026-01-23T01:09:46.539144693Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 01:09:46.552657 kubelet[2845]: I0123 01:09:46.552626 2845 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 01:09:46.568783 kubelet[2845]: I0123 01:09:46.567870 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45760cc25a363525bdd2693f83bfd246-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"45760cc25a363525bdd2693f83bfd246\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 01:09:46.576977 kubelet[2845]: I0123 01:09:46.576717 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 01:09:46.580735 kubelet[2845]: I0123 01:09:46.579739 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 01:09:46.590806 kubelet[2845]: I0123 01:09:46.590775 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 01:09:46.596392 kubelet[2845]: I0123 01:09:46.595191 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6jgw\" (UniqueName: \"kubernetes.io/projected/d6bf6455-a8cd-473e-8ba5-cecce307d4fc-kube-api-access-g6jgw\") pod \"kube-proxy-v7nks\" (UID: \"d6bf6455-a8cd-473e-8ba5-cecce307d4fc\") " pod="kube-system/kube-proxy-v7nks"
Jan 23 01:09:46.596392 kubelet[2845]: I0123 01:09:46.595719 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45760cc25a363525bdd2693f83bfd246-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"45760cc25a363525bdd2693f83bfd246\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 01:09:46.596392 kubelet[2845]: I0123 01:09:46.595766 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 01:09:46.596392 kubelet[2845]: I0123 01:09:46.595906 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 01:09:46.596392 kubelet[2845]: I0123 01:09:46.595940 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Jan 23 01:09:46.596734 kubelet[2845]: I0123 01:09:46.596041 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6bf6455-a8cd-473e-8ba5-cecce307d4fc-kube-proxy\") pod \"kube-proxy-v7nks\" (UID: \"d6bf6455-a8cd-473e-8ba5-cecce307d4fc\") " pod="kube-system/kube-proxy-v7nks"
Jan 23 01:09:46.604746 kubelet[2845]: I0123 01:09:46.596190 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6bf6455-a8cd-473e-8ba5-cecce307d4fc-xtables-lock\") pod \"kube-proxy-v7nks\" (UID: \"d6bf6455-a8cd-473e-8ba5-cecce307d4fc\") " pod="kube-system/kube-proxy-v7nks"
Jan 23 01:09:46.604878 kubelet[2845]: I0123 01:09:46.604781 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6bf6455-a8cd-473e-8ba5-cecce307d4fc-lib-modules\") pod \"kube-proxy-v7nks\" (UID: \"d6bf6455-a8cd-473e-8ba5-cecce307d4fc\") " pod="kube-system/kube-proxy-v7nks"
Jan 23 01:09:46.604878 kubelet[2845]: I0123 01:09:46.604819 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45760cc25a363525bdd2693f83bfd246-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"45760cc25a363525bdd2693f83bfd246\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 01:09:46.605454 kubelet[2845]: I0123 01:09:46.558901 2845 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 01:09:46.731741 systemd[1]: Created slice kubepods-besteffort-podd6bf6455_a8cd_473e_8ba5_cecce307d4fc.slice - libcontainer container kubepods-besteffort-podd6bf6455_a8cd_473e_8ba5_cecce307d4fc.slice.
Jan 23 01:09:47.010783 kubelet[2845]: E0123 01:09:47.008399 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:47.017895 kubelet[2845]: E0123 01:09:47.016783 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:47.027835 kubelet[2845]: E0123 01:09:47.027040 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:47.032839 kubelet[2845]: I0123 01:09:47.032810 2845 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 23 01:09:47.108037 kubelet[2845]: E0123 01:09:47.108003 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:47.113433 kubelet[2845]: I0123 01:09:47.113407 2845 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 23 01:09:47.113710 kubelet[2845]: I0123 01:09:47.113684 2845 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 23 01:09:47.114479 containerd[1546]: time="2026-01-23T01:09:47.114333985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7nks,Uid:d6bf6455-a8cd-473e-8ba5-cecce307d4fc,Namespace:kube-system,Attempt:0,}"
Jan 23 01:09:47.800429 containerd[1546]: time="2026-01-23T01:09:47.800016559Z" level=info msg="connecting to shim 40c9d79af40979586c3aedf2bef95feb2bcd4180f6c93867b6f6b8d52941f863" address="unix:///run/containerd/s/a26671eae1cb17a774074c198115269a64d91b6aac28615c74f755dea8339ee6" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:09:47.804503 kubelet[2845]: E0123 01:09:47.804152 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:47.809883 kubelet[2845]: E0123 01:09:47.806636 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:47.810333 kubelet[2845]: E0123 01:09:47.809516 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:49.543635 kubelet[2845]: E0123 01:09:49.522735 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:49.543635 kubelet[2845]: E0123 01:09:49.522761 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:49.543635 kubelet[2845]: E0123 01:09:49.525800 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:49.710133 systemd[1]: Started cri-containerd-40c9d79af40979586c3aedf2bef95feb2bcd4180f6c93867b6f6b8d52941f863.scope - libcontainer container 40c9d79af40979586c3aedf2bef95feb2bcd4180f6c93867b6f6b8d52941f863.
Jan 23 01:09:50.533026 kubelet[2845]: E0123 01:09:50.532985 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:50.538492 kubelet[2845]: E0123 01:09:50.538394 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:51.412912 containerd[1546]: time="2026-01-23T01:09:51.412787002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7nks,Uid:d6bf6455-a8cd-473e-8ba5-cecce307d4fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"40c9d79af40979586c3aedf2bef95feb2bcd4180f6c93867b6f6b8d52941f863\""
Jan 23 01:09:51.416934 kubelet[2845]: E0123 01:09:51.416117 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:51.441897 containerd[1546]: time="2026-01-23T01:09:51.441722084Z" level=info msg="CreateContainer within sandbox \"40c9d79af40979586c3aedf2bef95feb2bcd4180f6c93867b6f6b8d52941f863\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 01:09:51.534437 containerd[1546]: time="2026-01-23T01:09:51.533989871Z" level=info msg="Container ad55758ee5bf71646fc2d942a2084e4d78e3154d0d4e98b006c62b5046ddb15c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:09:51.540738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697500605.mount: Deactivated successfully.
Jan 23 01:09:51.544211 kubelet[2845]: E0123 01:09:51.544046 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:51.601587 containerd[1546]: time="2026-01-23T01:09:51.601381230Z" level=info msg="CreateContainer within sandbox \"40c9d79af40979586c3aedf2bef95feb2bcd4180f6c93867b6f6b8d52941f863\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad55758ee5bf71646fc2d942a2084e4d78e3154d0d4e98b006c62b5046ddb15c\""
Jan 23 01:09:51.608152 containerd[1546]: time="2026-01-23T01:09:51.607722361Z" level=info msg="StartContainer for \"ad55758ee5bf71646fc2d942a2084e4d78e3154d0d4e98b006c62b5046ddb15c\""
Jan 23 01:09:51.631169 containerd[1546]: time="2026-01-23T01:09:51.630995657Z" level=info msg="connecting to shim ad55758ee5bf71646fc2d942a2084e4d78e3154d0d4e98b006c62b5046ddb15c" address="unix:///run/containerd/s/a26671eae1cb17a774074c198115269a64d91b6aac28615c74f755dea8339ee6" protocol=ttrpc version=3
Jan 23 01:09:52.130123 systemd[1]: Started cri-containerd-ad55758ee5bf71646fc2d942a2084e4d78e3154d0d4e98b006c62b5046ddb15c.scope - libcontainer container ad55758ee5bf71646fc2d942a2084e4d78e3154d0d4e98b006c62b5046ddb15c.
Jan 23 01:09:53.110864 containerd[1546]: time="2026-01-23T01:09:53.110801158Z" level=info msg="StartContainer for \"ad55758ee5bf71646fc2d942a2084e4d78e3154d0d4e98b006c62b5046ddb15c\" returns successfully"
Jan 23 01:09:53.960021 kubelet[2845]: E0123 01:09:53.959981 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:53.966926 systemd[1]: Created slice kubepods-besteffort-podab6a424b_6153_497f_bc21_7da0e8148b19.slice - libcontainer container kubepods-besteffort-podab6a424b_6153_497f_bc21_7da0e8148b19.slice.
Jan 23 01:09:54.039388 kubelet[2845]: I0123 01:09:54.038898 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab6a424b-6153-497f-bc21-7da0e8148b19-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-crfjl\" (UID: \"ab6a424b-6153-497f-bc21-7da0e8148b19\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-crfjl"
Jan 23 01:09:54.039388 kubelet[2845]: I0123 01:09:54.039018 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk629\" (UniqueName: \"kubernetes.io/projected/ab6a424b-6153-497f-bc21-7da0e8148b19-kube-api-access-rk629\") pod \"tigera-operator-65cdcdfd6d-crfjl\" (UID: \"ab6a424b-6153-497f-bc21-7da0e8148b19\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-crfjl"
Jan 23 01:09:54.325134 containerd[1546]: time="2026-01-23T01:09:54.324968554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-crfjl,Uid:ab6a424b-6153-497f-bc21-7da0e8148b19,Namespace:tigera-operator,Attempt:0,}"
Jan 23 01:09:54.467835 containerd[1546]: time="2026-01-23T01:09:54.466999096Z" level=info msg="connecting to shim cf2ccfdd20b75e2b588b8afa8d54a9aef20fd9a5f09aa352b75ba7cbc8e7a645" address="unix:///run/containerd/s/987c951d1bf069be343e6333e89c105328d6abf48fe2542ab7e2df55e49485a6" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:09:54.661428 systemd[1]: Started cri-containerd-cf2ccfdd20b75e2b588b8afa8d54a9aef20fd9a5f09aa352b75ba7cbc8e7a645.scope - libcontainer container cf2ccfdd20b75e2b588b8afa8d54a9aef20fd9a5f09aa352b75ba7cbc8e7a645.
Jan 23 01:09:54.994778 kubelet[2845]: E0123 01:09:54.992490 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:55.157394 containerd[1546]: time="2026-01-23T01:09:55.157171162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-crfjl,Uid:ab6a424b-6153-497f-bc21-7da0e8148b19,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cf2ccfdd20b75e2b588b8afa8d54a9aef20fd9a5f09aa352b75ba7cbc8e7a645\""
Jan 23 01:09:55.182944 containerd[1546]: time="2026-01-23T01:09:55.180877027Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 23 01:09:55.490706 kubelet[2845]: E0123 01:09:55.482835 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:55.769445 kubelet[2845]: I0123 01:09:55.767208 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v7nks" podStartSLOduration=12.767184683 podStartE2EDuration="12.767184683s" podCreationTimestamp="2026-01-23 01:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:09:54.122431962 +0000 UTC m=+11.602516627" watchObservedRunningTime="2026-01-23 01:09:55.767184683 +0000 UTC m=+13.247269329"
Jan 23 01:09:56.028086 kubelet[2845]: E0123 01:09:56.027431 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:09:57.107544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965837004.mount: Deactivated successfully.
Jan 23 01:10:04.008162 containerd[1546]: time="2026-01-23T01:10:04.006806484Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 23 01:10:04.008162 containerd[1546]: time="2026-01-23T01:10:04.007567715Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:04.021953 containerd[1546]: time="2026-01-23T01:10:04.021889392Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:04.107129 containerd[1546]: time="2026-01-23T01:10:04.100603567Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 8.919270357s"
Jan 23 01:10:04.123672 containerd[1546]: time="2026-01-23T01:10:04.115985012Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 23 01:10:04.129574 containerd[1546]: time="2026-01-23T01:10:04.125726247Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:04.210087 containerd[1546]: time="2026-01-23T01:10:04.206706998Z" level=info msg="CreateContainer within sandbox \"cf2ccfdd20b75e2b588b8afa8d54a9aef20fd9a5f09aa352b75ba7cbc8e7a645\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 23 01:10:04.285149 containerd[1546]: time="2026-01-23T01:10:04.277045911Z" level=info msg="Container f3acc6e56fd7556432c550bdc66622a1ed4283d8acca0b6fe0d27daf508b89df: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:10:04.342592 containerd[1546]: time="2026-01-23T01:10:04.339587520Z" level=info msg="CreateContainer within sandbox \"cf2ccfdd20b75e2b588b8afa8d54a9aef20fd9a5f09aa352b75ba7cbc8e7a645\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f3acc6e56fd7556432c550bdc66622a1ed4283d8acca0b6fe0d27daf508b89df\""
Jan 23 01:10:04.347871 containerd[1546]: time="2026-01-23T01:10:04.345028027Z" level=info msg="StartContainer for \"f3acc6e56fd7556432c550bdc66622a1ed4283d8acca0b6fe0d27daf508b89df\""
Jan 23 01:10:04.354565 containerd[1546]: time="2026-01-23T01:10:04.353574472Z" level=info msg="connecting to shim f3acc6e56fd7556432c550bdc66622a1ed4283d8acca0b6fe0d27daf508b89df" address="unix:///run/containerd/s/987c951d1bf069be343e6333e89c105328d6abf48fe2542ab7e2df55e49485a6" protocol=ttrpc version=3
Jan 23 01:10:04.716993 systemd[1]: Started cri-containerd-f3acc6e56fd7556432c550bdc66622a1ed4283d8acca0b6fe0d27daf508b89df.scope - libcontainer container f3acc6e56fd7556432c550bdc66622a1ed4283d8acca0b6fe0d27daf508b89df.
Jan 23 01:10:05.024135 containerd[1546]: time="2026-01-23T01:10:05.023792842Z" level=info msg="StartContainer for \"f3acc6e56fd7556432c550bdc66622a1ed4283d8acca0b6fe0d27daf508b89df\" returns successfully"
Jan 23 01:10:05.415032 kubelet[2845]: I0123 01:10:05.413904 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-crfjl" podStartSLOduration=3.43093575 podStartE2EDuration="12.413824844s" podCreationTimestamp="2026-01-23 01:09:53 +0000 UTC" firstStartedPulling="2026-01-23 01:09:55.169562295 +0000 UTC m=+12.649646920" lastFinishedPulling="2026-01-23 01:10:04.15245139 +0000 UTC m=+21.632536014" observedRunningTime="2026-01-23 01:10:05.409715773 +0000 UTC m=+22.889800418" watchObservedRunningTime="2026-01-23 01:10:05.413824844 +0000 UTC m=+22.893909469"
Jan 23 01:10:17.212140 sudo[1778]: pam_unix(sudo:session): session closed for user root
Jan 23 01:10:17.225383 sshd[1777]: Connection closed by 10.0.0.1 port 41534
Jan 23 01:10:17.225137 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:17.235723 systemd[1]: sshd@8-10.0.0.42:22-10.0.0.1:41534.service: Deactivated successfully.
Jan 23 01:10:17.240525 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 01:10:17.241082 systemd[1]: session-9.scope: Consumed 26.773s CPU time, 229.2M memory peak.
Jan 23 01:10:17.247748 systemd-logind[1529]: Session 9 logged out. Waiting for processes to exit.
Jan 23 01:10:17.250833 systemd-logind[1529]: Removed session 9.
Jan 23 01:10:46.633169 kubelet[2845]: E0123 01:10:46.632976 2845 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.063s"
Jan 23 01:10:49.196469 systemd[1]: Created slice kubepods-besteffort-pod178f2d11_6bd9_472d_addf_71e226e20f93.slice - libcontainer container kubepods-besteffort-pod178f2d11_6bd9_472d_addf_71e226e20f93.slice.
Jan 23 01:10:49.309899 kubelet[2845]: I0123 01:10:49.308912 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/178f2d11-6bd9-472d-addf-71e226e20f93-tigera-ca-bundle\") pod \"calico-typha-5676875c9c-cfqbx\" (UID: \"178f2d11-6bd9-472d-addf-71e226e20f93\") " pod="calico-system/calico-typha-5676875c9c-cfqbx"
Jan 23 01:10:49.309899 kubelet[2845]: I0123 01:10:49.309194 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/178f2d11-6bd9-472d-addf-71e226e20f93-typha-certs\") pod \"calico-typha-5676875c9c-cfqbx\" (UID: \"178f2d11-6bd9-472d-addf-71e226e20f93\") " pod="calico-system/calico-typha-5676875c9c-cfqbx"
Jan 23 01:10:49.309899 kubelet[2845]: I0123 01:10:49.309385 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpqfb\" (UniqueName: \"kubernetes.io/projected/178f2d11-6bd9-472d-addf-71e226e20f93-kube-api-access-bpqfb\") pod \"calico-typha-5676875c9c-cfqbx\" (UID: \"178f2d11-6bd9-472d-addf-71e226e20f93\") " pod="calico-system/calico-typha-5676875c9c-cfqbx"
Jan 23 01:10:49.378204 systemd[1]: Created slice kubepods-besteffort-pod26dc301f_bc33_47ea_80df_b64cd4de7c2d.slice - libcontainer container kubepods-besteffort-pod26dc301f_bc33_47ea_80df_b64cd4de7c2d.slice.
Jan 23 01:10:49.511669 kubelet[2845]: I0123 01:10:49.510991 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/26dc301f-bc33-47ea-80df-b64cd4de7c2d-flexvol-driver-host\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.511669 kubelet[2845]: I0123 01:10:49.511030 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26dc301f-bc33-47ea-80df-b64cd4de7c2d-policysync\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.511669 kubelet[2845]: I0123 01:10:49.511047 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26dc301f-bc33-47ea-80df-b64cd4de7c2d-var-lib-calico\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.511669 kubelet[2845]: I0123 01:10:49.511128 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/26dc301f-bc33-47ea-80df-b64cd4de7c2d-var-run-calico\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.514410 kubelet[2845]: I0123 01:10:49.511198 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26dc301f-bc33-47ea-80df-b64cd4de7c2d-cni-log-dir\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.514410 kubelet[2845]: I0123 01:10:49.514049 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26dc301f-bc33-47ea-80df-b64cd4de7c2d-cni-net-dir\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.514410 kubelet[2845]: I0123 01:10:49.514084 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26dc301f-bc33-47ea-80df-b64cd4de7c2d-xtables-lock\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.514410 kubelet[2845]: I0123 01:10:49.514107 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26dc301f-bc33-47ea-80df-b64cd4de7c2d-tigera-ca-bundle\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.514410 kubelet[2845]: I0123 01:10:49.514187 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mscj8\" (UniqueName: \"kubernetes.io/projected/26dc301f-bc33-47ea-80df-b64cd4de7c2d-kube-api-access-mscj8\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.515978 kubelet[2845]: I0123 01:10:49.515957 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/26dc301f-bc33-47ea-80df-b64cd4de7c2d-node-certs\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.517727 kubelet[2845]: I0123 01:10:49.516372 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26dc301f-bc33-47ea-80df-b64cd4de7c2d-lib-modules\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.518996 kubelet[2845]: I0123 01:10:49.518970 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26dc301f-bc33-47ea-80df-b64cd4de7c2d-cni-bin-dir\") pod \"calico-node-mgtxl\" (UID: \"26dc301f-bc33-47ea-80df-b64cd4de7c2d\") " pod="calico-system/calico-node-mgtxl"
Jan 23 01:10:49.523590 kubelet[2845]: E0123 01:10:49.523382 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:10:49.527678 containerd[1546]: time="2026-01-23T01:10:49.527020258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5676875c9c-cfqbx,Uid:178f2d11-6bd9-472d-addf-71e226e20f93,Namespace:calico-system,Attempt:0,}"
Jan 23 01:10:49.557601 kubelet[2845]: E0123 01:10:49.556335 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8"
Jan 23 01:10:49.622412 kubelet[2845]: I0123 01:10:49.622097 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e2553e8f-fa3b-4995-9072-1f7cce3ee2c8-varrun\") pod \"csi-node-driver-7r7jt\" (UID: \"e2553e8f-fa3b-4995-9072-1f7cce3ee2c8\") " pod="calico-system/csi-node-driver-7r7jt"
Jan 23 01:10:49.622412 kubelet[2845]: I0123 01:10:49.622149 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2553e8f-fa3b-4995-9072-1f7cce3ee2c8-registration-dir\") pod \"csi-node-driver-7r7jt\" (UID: \"e2553e8f-fa3b-4995-9072-1f7cce3ee2c8\") " pod="calico-system/csi-node-driver-7r7jt"
Jan 23 01:10:49.627398 kubelet[2845]: I0123 01:10:49.624797 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvsxb\" (UniqueName: \"kubernetes.io/projected/e2553e8f-fa3b-4995-9072-1f7cce3ee2c8-kube-api-access-cvsxb\") pod \"csi-node-driver-7r7jt\" (UID: \"e2553e8f-fa3b-4995-9072-1f7cce3ee2c8\") " pod="calico-system/csi-node-driver-7r7jt"
Jan 23 01:10:49.627398 kubelet[2845]: I0123 01:10:49.625069 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2553e8f-fa3b-4995-9072-1f7cce3ee2c8-kubelet-dir\") pod \"csi-node-driver-7r7jt\" (UID: \"e2553e8f-fa3b-4995-9072-1f7cce3ee2c8\") " pod="calico-system/csi-node-driver-7r7jt"
Jan 23 01:10:49.627398 kubelet[2845]: I0123 01:10:49.625097 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2553e8f-fa3b-4995-9072-1f7cce3ee2c8-socket-dir\") pod \"csi-node-driver-7r7jt\" (UID: \"e2553e8f-fa3b-4995-9072-1f7cce3ee2c8\") " pod="calico-system/csi-node-driver-7r7jt"
Jan 23 01:10:49.642579 kubelet[2845]: E0123 01:10:49.641683 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.642579 kubelet[2845]: W0123 01:10:49.641785 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.642579 kubelet[2845]: E0123 01:10:49.642046 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.666055 kubelet[2845]: E0123 01:10:49.665918 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.666055 kubelet[2845]: W0123 01:10:49.666024 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.666055 kubelet[2845]: E0123 01:10:49.666058 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.707832 kubelet[2845]: E0123 01:10:49.707794 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.707832 kubelet[2845]: W0123 01:10:49.707820 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.708028 kubelet[2845]: E0123 01:10:49.707850 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.728595 containerd[1546]: time="2026-01-23T01:10:49.728366841Z" level=info msg="connecting to shim 24844fa85e9db28009ada31c3d15b67d7032289a2e0e2e37f55eb9c726a3406f" address="unix:///run/containerd/s/ada6df1169c6559e99747c2677d958b06f48395dcbc5d361099495f85259623b" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:10:49.745778 kubelet[2845]: E0123 01:10:49.743217 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.745778 kubelet[2845]: W0123 01:10:49.745695 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.745778 kubelet[2845]: E0123 01:10:49.745725 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.748752 kubelet[2845]: E0123 01:10:49.748608 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.748752 kubelet[2845]: W0123 01:10:49.748627 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.748752 kubelet[2845]: E0123 01:10:49.748642 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.754702 kubelet[2845]: E0123 01:10:49.754614 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.754702 kubelet[2845]: W0123 01:10:49.754637 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.754702 kubelet[2845]: E0123 01:10:49.754652 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.758143 kubelet[2845]: E0123 01:10:49.758021 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.758143 kubelet[2845]: W0123 01:10:49.758108 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.758143 kubelet[2845]: E0123 01:10:49.758139 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.765973 kubelet[2845]: E0123 01:10:49.765403 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.765973 kubelet[2845]: W0123 01:10:49.765428 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.765973 kubelet[2845]: E0123 01:10:49.765455 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.768129 kubelet[2845]: E0123 01:10:49.767636 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.769582 kubelet[2845]: W0123 01:10:49.768989 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.769582 kubelet[2845]: E0123 01:10:49.769016 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.769872 kubelet[2845]: E0123 01:10:49.769851 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.769961 kubelet[2845]: W0123 01:10:49.769943 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.770058 kubelet[2845]: E0123 01:10:49.770041 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.771179 kubelet[2845]: E0123 01:10:49.771159 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.771832 kubelet[2845]: W0123 01:10:49.771812 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.772722 kubelet[2845]: E0123 01:10:49.772704 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.775791 kubelet[2845]: E0123 01:10:49.775767 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.775901 kubelet[2845]: W0123 01:10:49.775879 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.776003 kubelet[2845]: E0123 01:10:49.775983 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.777066 kubelet[2845]: E0123 01:10:49.777046 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.777708 kubelet[2845]: W0123 01:10:49.777458 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.777809 kubelet[2845]: E0123 01:10:49.777790 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.778827 kubelet[2845]: E0123 01:10:49.778809 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.778928 kubelet[2845]: W0123 01:10:49.778911 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.779113 kubelet[2845]: E0123 01:10:49.779088 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.781400 kubelet[2845]: E0123 01:10:49.781381 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.781837 kubelet[2845]: W0123 01:10:49.781817 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.781933 kubelet[2845]: E0123 01:10:49.781916 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.782666 kubelet[2845]: E0123 01:10:49.782645 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.782786 kubelet[2845]: W0123 01:10:49.782745 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.782786 kubelet[2845]: E0123 01:10:49.782766 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.784151 kubelet[2845]: E0123 01:10:49.784127 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.785030 kubelet[2845]: W0123 01:10:49.784394 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.785030 kubelet[2845]: E0123 01:10:49.784415 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.785837 kubelet[2845]: E0123 01:10:49.785813 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.785928 kubelet[2845]: W0123 01:10:49.785905 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.786008 kubelet[2845]: E0123 01:10:49.785990 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.789833 kubelet[2845]: E0123 01:10:49.789809 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.789951 kubelet[2845]: W0123 01:10:49.789931 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.790045 kubelet[2845]: E0123 01:10:49.790028 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.791771 kubelet[2845]: E0123 01:10:49.791618 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.792046 kubelet[2845]: W0123 01:10:49.792027 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.792209 kubelet[2845]: E0123 01:10:49.792193 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.796422 kubelet[2845]: E0123 01:10:49.796038 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.796422 kubelet[2845]: W0123 01:10:49.796056 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.796422 kubelet[2845]: E0123 01:10:49.796073 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.797072 kubelet[2845]: E0123 01:10:49.796914 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.797072 kubelet[2845]: W0123 01:10:49.796937 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.797072 kubelet[2845]: E0123 01:10:49.796951 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.798941 kubelet[2845]: E0123 01:10:49.798918 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.799382 kubelet[2845]: W0123 01:10:49.799021 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.799382 kubelet[2845]: E0123 01:10:49.799044 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:49.800877 kubelet[2845]: E0123 01:10:49.800858 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:49.802420 kubelet[2845]: W0123 01:10:49.802396 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:49.802611 kubelet[2845]: E0123 01:10:49.802589 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 01:10:49.803573 kubelet[2845]: E0123 01:10:49.803472 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:49.803668 kubelet[2845]: W0123 01:10:49.803647 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:49.803734 kubelet[2845]: E0123 01:10:49.803721 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:49.804863 kubelet[2845]: E0123 01:10:49.804845 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:49.804976 kubelet[2845]: W0123 01:10:49.804955 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:49.805074 kubelet[2845]: E0123 01:10:49.805051 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:49.807627 kubelet[2845]: E0123 01:10:49.807608 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:49.807735 kubelet[2845]: W0123 01:10:49.807716 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:49.807825 kubelet[2845]: E0123 01:10:49.807807 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:49.809371 kubelet[2845]: E0123 01:10:49.808704 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:49.809371 kubelet[2845]: W0123 01:10:49.808728 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:49.809371 kubelet[2845]: E0123 01:10:49.808742 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:49.844754 kubelet[2845]: E0123 01:10:49.843872 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:49.844754 kubelet[2845]: W0123 01:10:49.843900 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:49.844754 kubelet[2845]: E0123 01:10:49.843930 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
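The repeating triplet above comes from kubelet's FlexVolume probing: it executes the driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ with the argument init and parses its stdout as JSON, so a missing executable yields empty output and the "unexpected end of JSON input" failure. Below is a minimal illustrative sketch of a conforming driver, written in Python for clarity; it is not the actual nodeagent~uds binary, only an example of the response format kubelet expects.

```python
#!/usr/bin/env python3
# Illustrative FlexVolume driver skeleton (NOT the real nodeagent~uds/uds
# binary). kubelet invokes the executable as "<driver> init" and unmarshals
# stdout as JSON; empty stdout is exactly what produces the
# "unexpected end of JSON input" entries logged above.
import json
import sys

def main() -> None:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "attach": False tells kubelet this driver has no attach/detach phase.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        # Unimplemented calls must still emit well-formed JSON.
        print(json.dumps({"status": "Not supported"}))

if __name__ == "__main__":
    main()
```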
Jan 23 01:10:49.859939 systemd[1]: Started cri-containerd-24844fa85e9db28009ada31c3d15b67d7032289a2e0e2e37f55eb9c726a3406f.scope - libcontainer container 24844fa85e9db28009ada31c3d15b67d7032289a2e0e2e37f55eb9c726a3406f.
Jan 23 01:10:49.991023 kubelet[2845]: E0123 01:10:49.990762 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:10:49.992048 containerd[1546]: time="2026-01-23T01:10:49.991702055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mgtxl,Uid:26dc301f-bc33-47ea-80df-b64cd4de7c2d,Namespace:calico-system,Attempt:0,}"
Jan 23 01:10:50.049121 containerd[1546]: time="2026-01-23T01:10:50.047610265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5676875c9c-cfqbx,Uid:178f2d11-6bd9-472d-addf-71e226e20f93,Namespace:calico-system,Attempt:0,} returns sandbox id \"24844fa85e9db28009ada31c3d15b67d7032289a2e0e2e37f55eb9c726a3406f\""
Jan 23 01:10:50.054692 kubelet[2845]: E0123 01:10:50.052217 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:10:50.062665 containerd[1546]: time="2026-01-23T01:10:50.062627121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 23 01:10:50.107055 containerd[1546]: time="2026-01-23T01:10:50.106999504Z" level=info msg="connecting to shim f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3" address="unix:///run/containerd/s/e6d17e0487980d681795df58a97b954f3193a193bb5dd07dbfeed38a97c1c13c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:10:50.257774 systemd[1]: Started cri-containerd-f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3.scope - libcontainer container f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3.
Jan 23 01:10:50.427597 containerd[1546]: time="2026-01-23T01:10:50.427100390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mgtxl,Uid:26dc301f-bc33-47ea-80df-b64cd4de7c2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3\""
Jan 23 01:10:50.433315 kubelet[2845]: E0123 01:10:50.432746 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:10:50.891434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2310620116.mount: Deactivated successfully.
Jan 23 01:10:51.538438 kubelet[2845]: E0123 01:10:51.537985 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8"
Jan 23 01:10:53.538207 kubelet[2845]: E0123 01:10:53.537865 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8"
Jan 23 01:10:54.862627 containerd[1546]: time="2026-01-23T01:10:54.862178476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:54.865177 containerd[1546]: time="2026-01-23T01:10:54.864784564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 23 01:10:54.868650 containerd[1546]: time="2026-01-23T01:10:54.868441028Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:54.880705 containerd[1546]: time="2026-01-23T01:10:54.880654091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:54.882387 containerd[1546]: time="2026-01-23T01:10:54.881967675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.819147625s"
Jan 23 01:10:54.882387 containerd[1546]: time="2026-01-23T01:10:54.882078513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 23 01:10:54.891622 containerd[1546]: time="2026-01-23T01:10:54.889618425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 01:10:54.933369 containerd[1546]: time="2026-01-23T01:10:54.931939772Z" level=info msg="CreateContainer within sandbox \"24844fa85e9db28009ada31c3d15b67d7032289a2e0e2e37f55eb9c726a3406f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 01:10:54.963367 containerd[1546]: time="2026-01-23T01:10:54.962663688Z" level=info msg="Container 855251e77a6eb1c0a4ac0705ea1a3cfb28bea1fb7fe7044725915605c4466a9c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:10:54.999417 containerd[1546]: time="2026-01-23T01:10:54.999079374Z" level=info msg="CreateContainer within sandbox \"24844fa85e9db28009ada31c3d15b67d7032289a2e0e2e37f55eb9c726a3406f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"855251e77a6eb1c0a4ac0705ea1a3cfb28bea1fb7fe7044725915605c4466a9c\""
Jan 23 01:10:55.019691 containerd[1546]: time="2026-01-23T01:10:55.018147576Z" level=info msg="StartContainer for \"855251e77a6eb1c0a4ac0705ea1a3cfb28bea1fb7fe7044725915605c4466a9c\""
Jan 23 01:10:55.035720 containerd[1546]: time="2026-01-23T01:10:55.035471088Z" level=info msg="connecting to shim 855251e77a6eb1c0a4ac0705ea1a3cfb28bea1fb7fe7044725915605c4466a9c" address="unix:///run/containerd/s/ada6df1169c6559e99747c2677d958b06f48395dcbc5d361099495f85259623b" protocol=ttrpc version=3
Jan 23 01:10:55.161643 systemd[1]: Started cri-containerd-855251e77a6eb1c0a4ac0705ea1a3cfb28bea1fb7fe7044725915605c4466a9c.scope - libcontainer container 855251e77a6eb1c0a4ac0705ea1a3cfb28bea1fb7fe7044725915605c4466a9c.
Jan 23 01:10:55.537758 kubelet[2845]: E0123 01:10:55.537691 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8"
Jan 23 01:10:55.633201 containerd[1546]: time="2026-01-23T01:10:55.632397467Z" level=info msg="StartContainer for \"855251e77a6eb1c0a4ac0705ea1a3cfb28bea1fb7fe7044725915605c4466a9c\" returns successfully"
Jan 23 01:10:55.863055 kubelet[2845]: E0123 01:10:55.861972 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:10:55.937725 kubelet[2845]: E0123 01:10:55.937019 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:55.940440 kubelet[2845]: W0123 01:10:55.940189 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:55.943207 kubelet[2845]: E0123 01:10:55.942698 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:10:55.947479 kubelet[2845]: I0123 01:10:55.944095 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5676875c9c-cfqbx" podStartSLOduration=2.115968869 podStartE2EDuration="6.944074663s" podCreationTimestamp="2026-01-23 01:10:49 +0000 UTC" firstStartedPulling="2026-01-23 01:10:50.058059724 +0000 UTC m=+67.538144350" lastFinishedPulling="2026-01-23 01:10:54.886165519 +0000 UTC m=+72.366250144" observedRunningTime="2026-01-23 01:10:55.941751193 +0000 UTC m=+73.421835837" watchObservedRunningTime="2026-01-23 01:10:55.944074663 +0000 UTC m=+73.424159317"
[the same three kubelet FlexVolume entries repeat with fresh timestamps from Jan 23 01:10:55.945782 through Jan 23 01:10:56.180190]
Jan 23 01:10:56.637602 kubelet[2845]: E0123 01:10:56.637392 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[the same three kubelet FlexVolume entries repeat with fresh timestamps from Jan 23 01:10:56.717644 through Jan 23 01:10:56.727023]
Jan 23 01:10:56.833769 containerd[1546]: time="2026-01-23T01:10:56.833705360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:56.844871 containerd[1546]: time="2026-01-23T01:10:56.844651214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 23 01:10:56.858670 containerd[1546]: time="2026-01-23T01:10:56.858618843Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:56.872034 containerd[1546]: time="2026-01-23T01:10:56.868909630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:10:56.875086 containerd[1546]: time="2026-01-23T01:10:56.874816323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.985129852s"
Jan 23 01:10:56.875086 containerd[1546]: time="2026-01-23T01:10:56.874968467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 23 01:10:56.906480 kubelet[2845]: E0123 01:10:56.905708 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[the same three kubelet FlexVolume entries repeat with fresh timestamps from Jan 23 01:10:56.939765 through Jan 23 01:10:56.940804]
Jan 23 01:10:56.945792 containerd[1546]: time="2026-01-23T01:10:56.941868969Z" level=info msg="CreateContainer within sandbox \"f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
[the same three kubelet FlexVolume entries repeat with fresh timestamps from Jan 23 01:10:56.947897 through Jan 23 01:10:57.034648]
Jan 23 01:10:57.037083 containerd[1546]: time="2026-01-23T01:10:57.035006256Z" level=info msg="Container ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0: CDI devices from CRI Config.CDIDevices: []"
[the same three kubelet FlexVolume entries repeat with fresh timestamps from Jan 23 01:10:57.039583 through Jan 23 01:10:57.063031]
Jan 23 01:10:57.082591 kubelet[2845]: E0123 01:10:57.079691 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:10:57.082591 kubelet[2845]: W0123 01:10:57.079832 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:10:57.082591 kubelet[2845]: E0123 01:10:57.079865 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:57.089114 kubelet[2845]: E0123 01:10:57.089025 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.089114 kubelet[2845]: W0123 01:10:57.089112 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.089209 kubelet[2845]: E0123 01:10:57.089133 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.091881 kubelet[2845]: E0123 01:10:57.091792 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.091881 kubelet[2845]: W0123 01:10:57.091880 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.091968 kubelet[2845]: E0123 01:10:57.091898 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.097379 kubelet[2845]: E0123 01:10:57.096998 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.097379 kubelet[2845]: W0123 01:10:57.097089 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.097379 kubelet[2845]: E0123 01:10:57.097108 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.113930 containerd[1546]: time="2026-01-23T01:10:57.112692436Z" level=info msg="CreateContainer within sandbox \"f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0\"" Jan 23 01:10:57.114100 kubelet[2845]: E0123 01:10:57.113128 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.114100 kubelet[2845]: W0123 01:10:57.113150 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.114100 kubelet[2845]: E0123 01:10:57.113173 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:57.115097 containerd[1546]: time="2026-01-23T01:10:57.115004508Z" level=info msg="StartContainer for \"ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0\"" Jan 23 01:10:57.122049 containerd[1546]: time="2026-01-23T01:10:57.121760450Z" level=info msg="connecting to shim ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0" address="unix:///run/containerd/s/e6d17e0487980d681795df58a97b954f3193a193bb5dd07dbfeed38a97c1c13c" protocol=ttrpc version=3 Jan 23 01:10:57.352188 systemd[1]: Started cri-containerd-ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0.scope - libcontainer container ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0. Jan 23 01:10:57.548783 kubelet[2845]: E0123 01:10:57.548724 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:10:57.558771 kubelet[2845]: E0123 01:10:57.555447 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:10:57.610938 kubelet[2845]: E0123 01:10:57.609117 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.610938 kubelet[2845]: W0123 01:10:57.609145 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.610938 kubelet[2845]: E0123 01:10:57.609169 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.621413 kubelet[2845]: E0123 01:10:57.620884 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.621413 kubelet[2845]: W0123 01:10:57.620917 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.621413 kubelet[2845]: E0123 01:10:57.620947 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.624481 kubelet[2845]: E0123 01:10:57.624462 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.626925 kubelet[2845]: W0123 01:10:57.624642 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.626925 kubelet[2845]: E0123 01:10:57.624676 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:57.632382 kubelet[2845]: E0123 01:10:57.632017 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.632382 kubelet[2845]: W0123 01:10:57.632044 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.632382 kubelet[2845]: E0123 01:10:57.632071 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.640145 kubelet[2845]: E0123 01:10:57.639792 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.640145 kubelet[2845]: W0123 01:10:57.639972 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.640145 kubelet[2845]: E0123 01:10:57.640006 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.668168 kubelet[2845]: E0123 01:10:57.663069 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.668168 kubelet[2845]: W0123 01:10:57.663682 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.668168 kubelet[2845]: E0123 01:10:57.664020 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.695717 kubelet[2845]: E0123 01:10:57.695675 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.697341 kubelet[2845]: W0123 01:10:57.696104 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.697341 kubelet[2845]: E0123 01:10:57.696145 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.699853 kubelet[2845]: E0123 01:10:57.699738 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.699899 kubelet[2845]: W0123 01:10:57.699851 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.699899 kubelet[2845]: E0123 01:10:57.699886 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:57.704034 kubelet[2845]: E0123 01:10:57.703977 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.704034 kubelet[2845]: W0123 01:10:57.704000 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.704034 kubelet[2845]: E0123 01:10:57.704035 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.705434 kubelet[2845]: E0123 01:10:57.704916 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.705434 kubelet[2845]: W0123 01:10:57.705011 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.705434 kubelet[2845]: E0123 01:10:57.705027 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.706035 kubelet[2845]: E0123 01:10:57.705873 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.706035 kubelet[2845]: W0123 01:10:57.706007 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.706035 kubelet[2845]: E0123 01:10:57.706026 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.710941 kubelet[2845]: E0123 01:10:57.710747 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.710941 kubelet[2845]: W0123 01:10:57.710769 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.710941 kubelet[2845]: E0123 01:10:57.710788 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.714148 kubelet[2845]: E0123 01:10:57.713481 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.714148 kubelet[2845]: W0123 01:10:57.713651 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.714148 kubelet[2845]: E0123 01:10:57.713682 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:57.716033 kubelet[2845]: E0123 01:10:57.715820 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.716033 kubelet[2845]: W0123 01:10:57.715850 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.716033 kubelet[2845]: E0123 01:10:57.715869 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.720159 kubelet[2845]: E0123 01:10:57.719875 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.720159 kubelet[2845]: W0123 01:10:57.719896 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.720159 kubelet[2845]: E0123 01:10:57.719919 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.726463 kubelet[2845]: E0123 01:10:57.725970 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.726463 kubelet[2845]: W0123 01:10:57.726068 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.726463 kubelet[2845]: E0123 01:10:57.726089 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.729183 kubelet[2845]: E0123 01:10:57.728876 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.729183 kubelet[2845]: W0123 01:10:57.728962 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.729183 kubelet[2845]: E0123 01:10:57.728979 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.734097 kubelet[2845]: E0123 01:10:57.732957 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.734097 kubelet[2845]: W0123 01:10:57.733051 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.734097 kubelet[2845]: E0123 01:10:57.733069 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:57.739611 kubelet[2845]: E0123 01:10:57.738607 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.739611 kubelet[2845]: W0123 01:10:57.738693 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.739611 kubelet[2845]: E0123 01:10:57.738709 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.742409 kubelet[2845]: E0123 01:10:57.741939 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.742409 kubelet[2845]: W0123 01:10:57.741958 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.742409 kubelet[2845]: E0123 01:10:57.741971 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.743815 kubelet[2845]: E0123 01:10:57.743023 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.743815 kubelet[2845]: W0123 01:10:57.743036 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.743815 kubelet[2845]: E0123 01:10:57.743047 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.746694 kubelet[2845]: E0123 01:10:57.746655 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.746694 kubelet[2845]: W0123 01:10:57.746674 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.746694 kubelet[2845]: E0123 01:10:57.746687 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.747686 kubelet[2845]: E0123 01:10:57.747643 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.747686 kubelet[2845]: W0123 01:10:57.747658 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.747686 kubelet[2845]: E0123 01:10:57.747672 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:57.749292 kubelet[2845]: E0123 01:10:57.748891 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.749292 kubelet[2845]: W0123 01:10:57.748909 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.749292 kubelet[2845]: E0123 01:10:57.748921 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.751985 kubelet[2845]: E0123 01:10:57.751882 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:57.751985 kubelet[2845]: W0123 01:10:57.751977 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:57.752077 kubelet[2845]: E0123 01:10:57.751994 2845 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:57.840297 containerd[1546]: time="2026-01-23T01:10:57.840187339Z" level=info msg="StartContainer for \"ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0\" returns successfully" Jan 23 01:10:57.908972 systemd[1]: cri-containerd-ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0.scope: Deactivated successfully. Jan 23 01:10:57.941716 kubelet[2845]: E0123 01:10:57.938918 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:10:57.941716 kubelet[2845]: E0123 01:10:57.941616 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:10:57.944139 containerd[1546]: time="2026-01-23T01:10:57.944094437Z" level=info msg="received container exit event container_id:\"ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0\" id:\"ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0\" pid:3555 exited_at:{seconds:1769130657 nanos:937673815}" Jan 23 01:10:58.248198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce88e9b9545ddd5b7a8be457b3943730e8256d83b853d2250f0befab02e11bf0-rootfs.mount: Deactivated successfully. 
Jan 23 01:10:58.546863 kubelet[2845]: E0123 01:10:58.539982 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:10:58.951624 kubelet[2845]: E0123 01:10:58.947376 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:10:58.951624 kubelet[2845]: E0123 01:10:58.947872 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:10:58.962988 containerd[1546]: time="2026-01-23T01:10:58.959951838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 01:10:59.545931 kubelet[2845]: E0123 01:10:59.544113 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:01.539354 kubelet[2845]: E0123 01:11:01.538032 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:03.544126 kubelet[2845]: E0123 01:11:03.539948 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:05.540313 kubelet[2845]: E0123 01:11:05.538486 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:05.541747 kubelet[2845]: E0123 01:11:05.540426 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:07.540644 kubelet[2845]: E0123 01:11:07.540184 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:09.539687 kubelet[2845]: E0123 01:11:09.537999 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:09.584992 containerd[1546]: time="2026-01-23T01:11:09.584852108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:09.586873 containerd[1546]: time="2026-01-23T01:11:09.586817196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 01:11:09.592710 containerd[1546]: time="2026-01-23T01:11:09.592649140Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:09.599367 containerd[1546]: time="2026-01-23T01:11:09.599198745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:09.600664 containerd[1546]: time="2026-01-23T01:11:09.600482054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 10.640483769s" Jan 23 01:11:09.600750 containerd[1546]: time="2026-01-23T01:11:09.600682168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 01:11:09.612142 containerd[1546]: time="2026-01-23T01:11:09.612088369Z" level=info msg="CreateContainer within sandbox \"f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 01:11:09.651388 containerd[1546]: time="2026-01-23T01:11:09.649679438Z" level=info msg="Container 31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:09.679686 containerd[1546]: time="2026-01-23T01:11:09.679636877Z" level=info msg="CreateContainer within sandbox \"f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1\"" Jan 23 01:11:09.683980 containerd[1546]: time="2026-01-23T01:11:09.683149281Z" level=info msg="StartContainer for \"31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1\"" Jan 23 01:11:09.689220 containerd[1546]: time="2026-01-23T01:11:09.689014992Z" level=info msg="connecting to shim 31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1" address="unix:///run/containerd/s/e6d17e0487980d681795df58a97b954f3193a193bb5dd07dbfeed38a97c1c13c" protocol=ttrpc version=3 Jan 23 01:11:09.827901 systemd[1]: Started cri-containerd-31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1.scope - libcontainer container 31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1. 
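Note on the pull record above: the tag ghcr.io/flatcar/calico/cni:v3.30.4 resolves to the repo digest ghcr.io/flatcar/calico/cni@sha256:273501a9..., while the image id sha256:24e1e737... typically names the image configuration blob. A registry digest is content-addressed: the SHA-256 of the raw manifest bytes as served by the registry, which is why a mutable tag and an immutable digest can name the same image. A small standalone sketch of the derivation (illustration, not containerd code):

```go
// manifestdigest: an OCI/registry "repo digest" is just
// "sha256:" + hex(SHA-256(raw manifest bytes)).
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: manifestdigest <manifest.json>")
		os.Exit(1)
	}
	manifest, err := os.ReadFile(os.Args[1]) // raw manifest JSON, byte-for-byte
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum := sha256.Sum256(manifest)
	fmt.Printf("sha256:%x\n", sum)
}
```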
Jan 23 01:11:10.140034 containerd[1546]: time="2026-01-23T01:11:10.139473698Z" level=info msg="StartContainer for \"31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1\" returns successfully" Jan 23 01:11:11.098971 kubelet[2845]: E0123 01:11:11.098683 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:11.541422 kubelet[2845]: E0123 01:11:11.538952 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:12.137211 kubelet[2845]: E0123 01:11:12.131644 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:13.062180 systemd[1]: cri-containerd-31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1.scope: Deactivated successfully. Jan 23 01:11:13.063033 systemd[1]: cri-containerd-31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1.scope: Consumed 3.205s CPU time, 181.2M memory peak, 2.3M read from disk, 171.3M written to disk. Jan 23 01:11:13.076422 containerd[1546]: time="2026-01-23T01:11:13.076211075Z" level=info msg="received container exit event container_id:\"31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1\" id:\"31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1\" pid:3645 exited_at:{seconds:1769130673 nanos:75169189}" Jan 23 01:11:13.129471 kubelet[2845]: I0123 01:11:13.129432 2845 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 01:11:13.158826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31ea152b9a361afcd1892effe856ed7a6e82358e81babfc0a23cc9cbed58d3f1-rootfs.mount: Deactivated successfully. Jan 23 01:11:13.332802 systemd[1]: Created slice kubepods-besteffort-pod369aa670_4b29_4a0c_8fff_6ae07d46c778.slice - libcontainer container kubepods-besteffort-pod369aa670_4b29_4a0c_8fff_6ae07d46c778.slice. Jan 23 01:11:13.376698 systemd[1]: Created slice kubepods-burstable-pod853a2368_0964_46f6_bbf0_478966b86444.slice - libcontainer container kubepods-burstable-pod853a2368_0964_46f6_bbf0_478966b86444.slice. Jan 23 01:11:13.396472 systemd[1]: Created slice kubepods-burstable-pod3ed63450_d0f7_42e4_856a_8ea4e718ff98.slice - libcontainer container kubepods-burstable-pod3ed63450_d0f7_42e4_856a_8ea4e718ff98.slice. Jan 23 01:11:13.417489 systemd[1]: Created slice kubepods-besteffort-pod662d18b3_33cf_4000_b003_c8e7f6b2e810.slice - libcontainer container kubepods-besteffort-pod662d18b3_33cf_4000_b003_c8e7f6b2e810.slice. 
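Note on the recurring dns.go:154 entries: resolvers built on glibc only read the first three nameserver lines in resolv.conf, and Kubernetes applies the same cap of three when it builds a pod's DNS configuration, so a host list with more than three servers is truncated and the applied line (here 1.1.1.1 1.0.0.1 8.8.8.8) is logged as a warning. A sketch of that capping behavior (hypothetical helper, not kubelet's actual code):

```go
// capNameservers: sketch of the behavior behind dns.go:154. Anything
// past the first three nameservers is dropped; the applied line and the
// omitted remainder are reported, mirroring the log message above.
package main

import "fmt"

const maxNameservers = 3 // matches the glibc/Kubernetes limit

func capNameservers(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // 4th entry is an assumption
	applied, omitted := capNameservers(host)
	if len(omitted) > 0 {
		fmt.Printf("Nameserver limits exceeded, applied: %v, omitted: %v\n", applied, omitted)
	}
}
```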
Jan 23 01:11:13.428064 kubelet[2845]: I0123 01:11:13.427033 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q58bb\" (UniqueName: \"kubernetes.io/projected/369aa670-4b29-4a0c-8fff-6ae07d46c778-kube-api-access-q58bb\") pod \"calico-kube-controllers-86ccb5f87d-dkzhd\" (UID: \"369aa670-4b29-4a0c-8fff-6ae07d46c778\") " pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" Jan 23 01:11:13.428064 kubelet[2845]: I0123 01:11:13.427095 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/662d18b3-33cf-4000-b003-c8e7f6b2e810-goldmane-key-pair\") pod \"goldmane-7c778bb748-bmvtb\" (UID: \"662d18b3-33cf-4000-b003-c8e7f6b2e810\") " pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 01:11:13.428064 kubelet[2845]: I0123 01:11:13.427128 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-whisker-ca-bundle\") pod \"whisker-7697c4c58d-kbbbl\" (UID: \"eddc7a17-a39d-4b74-9498-cfa4bf00bdf9\") " pod="calico-system/whisker-7697c4c58d-kbbbl" Jan 23 01:11:13.428064 kubelet[2845]: I0123 01:11:13.427152 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ed63450-d0f7-42e4-856a-8ea4e718ff98-config-volume\") pod \"coredns-66bc5c9577-nbdj7\" (UID: \"3ed63450-d0f7-42e4-856a-8ea4e718ff98\") " pod="kube-system/coredns-66bc5c9577-nbdj7" Jan 23 01:11:13.428064 kubelet[2845]: I0123 01:11:13.427184 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662d18b3-33cf-4000-b003-c8e7f6b2e810-config\") pod \"goldmane-7c778bb748-bmvtb\" (UID: \"662d18b3-33cf-4000-b003-c8e7f6b2e810\") " pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 01:11:13.429098 kubelet[2845]: I0123 01:11:13.427212 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txb67\" (UniqueName: \"kubernetes.io/projected/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-kube-api-access-txb67\") pod \"whisker-7697c4c58d-kbbbl\" (UID: \"eddc7a17-a39d-4b74-9498-cfa4bf00bdf9\") " pod="calico-system/whisker-7697c4c58d-kbbbl" Jan 23 01:11:13.429098 kubelet[2845]: I0123 01:11:13.427460 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2vg\" (UniqueName: \"kubernetes.io/projected/3ed63450-d0f7-42e4-856a-8ea4e718ff98-kube-api-access-lf2vg\") pod \"coredns-66bc5c9577-nbdj7\" (UID: \"3ed63450-d0f7-42e4-856a-8ea4e718ff98\") " pod="kube-system/coredns-66bc5c9577-nbdj7" Jan 23 01:11:13.429098 kubelet[2845]: I0123 01:11:13.427486 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-whisker-backend-key-pair\") pod \"whisker-7697c4c58d-kbbbl\" (UID: \"eddc7a17-a39d-4b74-9498-cfa4bf00bdf9\") " pod="calico-system/whisker-7697c4c58d-kbbbl" Jan 23 01:11:13.429098 kubelet[2845]: I0123 01:11:13.427604 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/3b1f5033-cf51-4add-93c1-34dedb396092-calico-apiserver-certs\") pod \"calico-apiserver-68948fdbd6-4jxhx\" (UID: \"3b1f5033-cf51-4add-93c1-34dedb396092\") " pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" Jan 23 01:11:13.429098 kubelet[2845]: I0123 01:11:13.427636 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/369aa670-4b29-4a0c-8fff-6ae07d46c778-tigera-ca-bundle\") pod \"calico-kube-controllers-86ccb5f87d-dkzhd\" (UID: \"369aa670-4b29-4a0c-8fff-6ae07d46c778\") " pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" Jan 23 01:11:13.429479 kubelet[2845]: I0123 01:11:13.427853 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/853a2368-0964-46f6-bbf0-478966b86444-config-volume\") pod \"coredns-66bc5c9577-q7bt2\" (UID: \"853a2368-0964-46f6-bbf0-478966b86444\") " pod="kube-system/coredns-66bc5c9577-q7bt2" Jan 23 01:11:13.429479 kubelet[2845]: I0123 01:11:13.427881 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lhmb\" (UniqueName: \"kubernetes.io/projected/853a2368-0964-46f6-bbf0-478966b86444-kube-api-access-8lhmb\") pod \"coredns-66bc5c9577-q7bt2\" (UID: \"853a2368-0964-46f6-bbf0-478966b86444\") " pod="kube-system/coredns-66bc5c9577-q7bt2" Jan 23 01:11:13.429479 kubelet[2845]: I0123 01:11:13.427909 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/662d18b3-33cf-4000-b003-c8e7f6b2e810-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-bmvtb\" (UID: \"662d18b3-33cf-4000-b003-c8e7f6b2e810\") " pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 01:11:13.429479 kubelet[2845]: I0123 01:11:13.427930 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6s82\" (UniqueName: \"kubernetes.io/projected/662d18b3-33cf-4000-b003-c8e7f6b2e810-kube-api-access-v6s82\") pod \"goldmane-7c778bb748-bmvtb\" (UID: \"662d18b3-33cf-4000-b003-c8e7f6b2e810\") " pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 01:11:13.429479 kubelet[2845]: I0123 01:11:13.427955 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjfml\" (UniqueName: \"kubernetes.io/projected/3b1f5033-cf51-4add-93c1-34dedb396092-kube-api-access-jjfml\") pod \"calico-apiserver-68948fdbd6-4jxhx\" (UID: \"3b1f5033-cf51-4add-93c1-34dedb396092\") " pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" Jan 23 01:11:13.432392 systemd[1]: Created slice kubepods-besteffort-pod3b1f5033_cf51_4add_93c1_34dedb396092.slice - libcontainer container kubepods-besteffort-pod3b1f5033_cf51_4add_93c1_34dedb396092.slice. Jan 23 01:11:13.445830 systemd[1]: Created slice kubepods-besteffort-podeddc7a17_a39d_4b74_9498_cfa4bf00bdf9.slice - libcontainer container kubepods-besteffort-podeddc7a17_a39d_4b74_9498_cfa4bf00bdf9.slice. Jan 23 01:11:13.456800 systemd[1]: Created slice kubepods-besteffort-pod5865bb49_f0fe_4eb4_8f6c_74bc939474ad.slice - libcontainer container kubepods-besteffort-pod5865bb49_f0fe_4eb4_8f6c_74bc939474ad.slice. 
Jan 23 01:11:13.529140 kubelet[2845]: I0123 01:11:13.528823 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5865bb49-f0fe-4eb4-8f6c-74bc939474ad-calico-apiserver-certs\") pod \"calico-apiserver-68948fdbd6-mdlvc\" (UID: \"5865bb49-f0fe-4eb4-8f6c-74bc939474ad\") " pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" Jan 23 01:11:13.529140 kubelet[2845]: I0123 01:11:13.528945 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvsb\" (UniqueName: \"kubernetes.io/projected/5865bb49-f0fe-4eb4-8f6c-74bc939474ad-kube-api-access-qpvsb\") pod \"calico-apiserver-68948fdbd6-mdlvc\" (UID: \"5865bb49-f0fe-4eb4-8f6c-74bc939474ad\") " pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" Jan 23 01:11:13.614974 systemd[1]: Created slice kubepods-besteffort-pode2553e8f_fa3b_4995_9072_1f7cce3ee2c8.slice - libcontainer container kubepods-besteffort-pode2553e8f_fa3b_4995_9072_1f7cce3ee2c8.slice. Jan 23 01:11:13.630754 containerd[1546]: time="2026-01-23T01:11:13.630696071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r7jt,Uid:e2553e8f-fa3b-4995-9072-1f7cce3ee2c8,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:13.682010 containerd[1546]: time="2026-01-23T01:11:13.681172317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86ccb5f87d-dkzhd,Uid:369aa670-4b29-4a0c-8fff-6ae07d46c778,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:13.697420 kubelet[2845]: E0123 01:11:13.697030 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:13.710374 containerd[1546]: time="2026-01-23T01:11:13.710070969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q7bt2,Uid:853a2368-0964-46f6-bbf0-478966b86444,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:13.721496 kubelet[2845]: E0123 01:11:13.721455 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:13.724778 containerd[1546]: time="2026-01-23T01:11:13.724737559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nbdj7,Uid:3ed63450-d0f7-42e4-856a-8ea4e718ff98,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:13.741007 containerd[1546]: time="2026-01-23T01:11:13.740954715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bmvtb,Uid:662d18b3-33cf-4000-b003-c8e7f6b2e810,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:13.756483 containerd[1546]: time="2026-01-23T01:11:13.756432688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-4jxhx,Uid:3b1f5033-cf51-4add-93c1-34dedb396092,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:13.764794 containerd[1546]: time="2026-01-23T01:11:13.764757619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7697c4c58d-kbbbl,Uid:eddc7a17-a39d-4b74-9498-cfa4bf00bdf9,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:13.772393 containerd[1546]: time="2026-01-23T01:11:13.772184085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-mdlvc,Uid:5865bb49-f0fe-4eb4-8f6c-74bc939474ad,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:14.211801 kubelet[2845]: 
E0123 01:11:14.210762 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:14.237154 containerd[1546]: time="2026-01-23T01:11:14.236922383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:11:14.400732 containerd[1546]: time="2026-01-23T01:11:14.400587853Z" level=error msg="Failed to destroy network for sandbox \"110ead6ee2ae277d042ba80e11072c7497117f61db6c03d3e90fd698d295714d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.407435 systemd[1]: run-netns-cni\x2dda6572d3\x2de626\x2dee39\x2d7e5c\x2d76249c0aefbe.mount: Deactivated successfully. Jan 23 01:11:14.429577 containerd[1546]: time="2026-01-23T01:11:14.427836587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-4jxhx,Uid:3b1f5033-cf51-4add-93c1-34dedb396092,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"110ead6ee2ae277d042ba80e11072c7497117f61db6c03d3e90fd698d295714d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.498643 kubelet[2845]: E0123 01:11:14.497773 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"110ead6ee2ae277d042ba80e11072c7497117f61db6c03d3e90fd698d295714d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.498643 kubelet[2845]: E0123 01:11:14.497947 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"110ead6ee2ae277d042ba80e11072c7497117f61db6c03d3e90fd698d295714d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" Jan 23 01:11:14.498643 kubelet[2845]: E0123 01:11:14.497975 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"110ead6ee2ae277d042ba80e11072c7497117f61db6c03d3e90fd698d295714d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" Jan 23 01:11:14.499435 kubelet[2845]: E0123 01:11:14.498044 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"110ead6ee2ae277d042ba80e11072c7497117f61db6c03d3e90fd698d295714d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:11:14.563920 containerd[1546]: time="2026-01-23T01:11:14.562972500Z" level=error msg="Failed to destroy network for sandbox \"c45c0446c10c21b02e52b0a22bef68893b49f4288fd52504cefdd87d190c0f19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.572087 systemd[1]: run-netns-cni\x2d9ed0939f\x2d9064\x2d4a70\x2de5a9\x2d38f07ff741a6.mount: Deactivated successfully. Jan 23 01:11:14.575085 containerd[1546]: time="2026-01-23T01:11:14.574919370Z" level=error msg="Failed to destroy network for sandbox \"174047e76b27dd5f6b4332d8107943c5c1c1811e63486f7214cd383b6bec68d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.575985 containerd[1546]: time="2026-01-23T01:11:14.574942145Z" level=error msg="Failed to destroy network for sandbox \"edee210896b00177367e0086069852dc979b8e4da377d77066ca9ac48f6a1169\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.583169 systemd[1]: run-netns-cni\x2d5869e6f6\x2d2004\x2d000e\x2d2ac9\x2da1cbcdfa1989.mount: Deactivated successfully. Jan 23 01:11:14.586419 containerd[1546]: time="2026-01-23T01:11:14.585413097Z" level=error msg="Failed to destroy network for sandbox \"c71cd8749450e0ff1c2738d2296268881239f2aa9bdd73e230f95d6288f3c422\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.586419 containerd[1546]: time="2026-01-23T01:11:14.585737102Z" level=error msg="Failed to destroy network for sandbox \"a486d6ebcddf2d69f0bb95ac067d54c4f7c7b3a954e4cb67d9336d98c18b7944\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.586050 systemd[1]: run-netns-cni\x2d438250c4\x2d3141\x2d6e96\x2dd7b2\x2d07022e8bda29.mount: Deactivated successfully. 
Jan 23 01:11:14.593785 containerd[1546]: time="2026-01-23T01:11:14.593733187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7697c4c58d-kbbbl,Uid:eddc7a17-a39d-4b74-9498-cfa4bf00bdf9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45c0446c10c21b02e52b0a22bef68893b49f4288fd52504cefdd87d190c0f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.594443 kubelet[2845]: E0123 01:11:14.594408 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45c0446c10c21b02e52b0a22bef68893b49f4288fd52504cefdd87d190c0f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.594663 kubelet[2845]: E0123 01:11:14.594642 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45c0446c10c21b02e52b0a22bef68893b49f4288fd52504cefdd87d190c0f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7697c4c58d-kbbbl" Jan 23 01:11:14.594740 kubelet[2845]: E0123 01:11:14.594721 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45c0446c10c21b02e52b0a22bef68893b49f4288fd52504cefdd87d190c0f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7697c4c58d-kbbbl" Jan 23 01:11:14.594845 kubelet[2845]: E0123 01:11:14.594821 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7697c4c58d-kbbbl_calico-system(eddc7a17-a39d-4b74-9498-cfa4bf00bdf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7697c4c58d-kbbbl_calico-system(eddc7a17-a39d-4b74-9498-cfa4bf00bdf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c45c0446c10c21b02e52b0a22bef68893b49f4288fd52504cefdd87d190c0f19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7697c4c58d-kbbbl" podUID="eddc7a17-a39d-4b74-9498-cfa4bf00bdf9" Jan 23 01:11:14.596909 systemd[1]: run-netns-cni\x2d72f4f009\x2d5027\x2d9c97\x2d2f7d\x2d366af1479382.mount: Deactivated successfully. Jan 23 01:11:14.597086 systemd[1]: run-netns-cni\x2d1151cc0b\x2dc8a1\x2df062\x2dc2f1\x2d5d0d98750bb9.mount: Deactivated successfully. 
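Note on the run-netns-cni\x2d... mount units above: systemd escapes filesystem paths into unit names by turning "/" into "-" and hex-escaping literal dashes as \x2d, so /run/netns/cni-da6572d3-... becomes run-netns-cni\x2dda6572d3\x2d....mount. A simplified sketch of that escaping (systemd-escape(1) implements the full rules, which also cover leading dots and other special bytes):

```go
// unitEscapePath: simplified sketch of systemd path escaping as seen in
// the run-netns mount unit names: "/" separators become "-", and literal
// "-" bytes are escaped as `\x2d`.
package main

import (
	"fmt"
	"strings"
)

func unitEscapePath(path string) string {
	trimmed := strings.Trim(path, "/")
	var b strings.Builder
	for _, c := range []byte(trimmed) {
		switch c {
		case '/':
			b.WriteByte('-')
		case '-':
			b.WriteString(`\x2d`)
		default:
			b.WriteByte(c)
		}
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(unitEscapePath("/run/netns/cni-da6572d3-e626-ee39-7e5c-76249c0aefbe"))
	// Output: run-netns-cni\x2dda6572d3\x2de626\x2dee39\x2d7e5c\x2d76249c0aefbe.mount
}
```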
Jan 23 01:11:14.598437 containerd[1546]: time="2026-01-23T01:11:14.597466527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bmvtb,Uid:662d18b3-33cf-4000-b003-c8e7f6b2e810,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"174047e76b27dd5f6b4332d8107943c5c1c1811e63486f7214cd383b6bec68d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.599143 kubelet[2845]: E0123 01:11:14.599077 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"174047e76b27dd5f6b4332d8107943c5c1c1811e63486f7214cd383b6bec68d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.602064 kubelet[2845]: E0123 01:11:14.601806 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"174047e76b27dd5f6b4332d8107943c5c1c1811e63486f7214cd383b6bec68d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 01:11:14.605363 kubelet[2845]: E0123 01:11:14.602200 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"174047e76b27dd5f6b4332d8107943c5c1c1811e63486f7214cd383b6bec68d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 01:11:14.606169 kubelet[2845]: E0123 01:11:14.605418 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"174047e76b27dd5f6b4332d8107943c5c1c1811e63486f7214cd383b6bec68d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:11:14.617053 containerd[1546]: time="2026-01-23T01:11:14.615945549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q7bt2,Uid:853a2368-0964-46f6-bbf0-478966b86444,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"edee210896b00177367e0086069852dc979b8e4da377d77066ca9ac48f6a1169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.622142 kubelet[2845]: E0123 01:11:14.620843 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"edee210896b00177367e0086069852dc979b8e4da377d77066ca9ac48f6a1169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.622142 kubelet[2845]: E0123 01:11:14.620901 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edee210896b00177367e0086069852dc979b8e4da377d77066ca9ac48f6a1169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-q7bt2" Jan 23 01:11:14.622142 kubelet[2845]: E0123 01:11:14.620925 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edee210896b00177367e0086069852dc979b8e4da377d77066ca9ac48f6a1169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-q7bt2" Jan 23 01:11:14.622659 kubelet[2845]: E0123 01:11:14.620993 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-q7bt2_kube-system(853a2368-0964-46f6-bbf0-478966b86444)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-q7bt2_kube-system(853a2368-0964-46f6-bbf0-478966b86444)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edee210896b00177367e0086069852dc979b8e4da377d77066ca9ac48f6a1169\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-q7bt2" podUID="853a2368-0964-46f6-bbf0-478966b86444" Jan 23 01:11:14.624804 containerd[1546]: time="2026-01-23T01:11:14.624643688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r7jt,Uid:e2553e8f-fa3b-4995-9072-1f7cce3ee2c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71cd8749450e0ff1c2738d2296268881239f2aa9bdd73e230f95d6288f3c422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.627087 kubelet[2845]: E0123 01:11:14.626776 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71cd8749450e0ff1c2738d2296268881239f2aa9bdd73e230f95d6288f3c422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.627087 kubelet[2845]: E0123 01:11:14.626849 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71cd8749450e0ff1c2738d2296268881239f2aa9bdd73e230f95d6288f3c422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-7r7jt" Jan 23 01:11:14.627087 kubelet[2845]: E0123 01:11:14.626962 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71cd8749450e0ff1c2738d2296268881239f2aa9bdd73e230f95d6288f3c422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7r7jt" Jan 23 01:11:14.627220 kubelet[2845]: E0123 01:11:14.627027 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c71cd8749450e0ff1c2738d2296268881239f2aa9bdd73e230f95d6288f3c422\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:14.632937 containerd[1546]: time="2026-01-23T01:11:14.632734514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nbdj7,Uid:3ed63450-d0f7-42e4-856a-8ea4e718ff98,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a486d6ebcddf2d69f0bb95ac067d54c4f7c7b3a954e4cb67d9336d98c18b7944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.635784 kubelet[2845]: E0123 01:11:14.635657 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a486d6ebcddf2d69f0bb95ac067d54c4f7c7b3a954e4cb67d9336d98c18b7944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.635784 kubelet[2845]: E0123 01:11:14.635728 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a486d6ebcddf2d69f0bb95ac067d54c4f7c7b3a954e4cb67d9336d98c18b7944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nbdj7" Jan 23 01:11:14.635784 kubelet[2845]: E0123 01:11:14.635764 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a486d6ebcddf2d69f0bb95ac067d54c4f7c7b3a954e4cb67d9336d98c18b7944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nbdj7" Jan 23 01:11:14.635934 kubelet[2845]: E0123 01:11:14.635819 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-66bc5c9577-nbdj7_kube-system(3ed63450-d0f7-42e4-856a-8ea4e718ff98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nbdj7_kube-system(3ed63450-d0f7-42e4-856a-8ea4e718ff98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a486d6ebcddf2d69f0bb95ac067d54c4f7c7b3a954e4cb67d9336d98c18b7944\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nbdj7" podUID="3ed63450-d0f7-42e4-856a-8ea4e718ff98" Jan 23 01:11:14.655134 containerd[1546]: time="2026-01-23T01:11:14.655061597Z" level=error msg="Failed to destroy network for sandbox \"b314a2a207c74aa61fe4ab2e5c25e0164a5cc88c5eec979ec2b4581e23231d17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.662098 containerd[1546]: time="2026-01-23T01:11:14.662044083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86ccb5f87d-dkzhd,Uid:369aa670-4b29-4a0c-8fff-6ae07d46c778,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b314a2a207c74aa61fe4ab2e5c25e0164a5cc88c5eec979ec2b4581e23231d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.664924 kubelet[2845]: E0123 01:11:14.664167 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b314a2a207c74aa61fe4ab2e5c25e0164a5cc88c5eec979ec2b4581e23231d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.665052 kubelet[2845]: E0123 01:11:14.664963 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b314a2a207c74aa61fe4ab2e5c25e0164a5cc88c5eec979ec2b4581e23231d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" Jan 23 01:11:14.665052 kubelet[2845]: E0123 01:11:14.664998 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b314a2a207c74aa61fe4ab2e5c25e0164a5cc88c5eec979ec2b4581e23231d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" Jan 23 01:11:14.665141 kubelet[2845]: E0123 01:11:14.665099 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"b314a2a207c74aa61fe4ab2e5c25e0164a5cc88c5eec979ec2b4581e23231d17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:11:14.724849 containerd[1546]: time="2026-01-23T01:11:14.724176981Z" level=error msg="Failed to destroy network for sandbox \"fa8a0f38fb63768f650dbc16ea852400cc1e14d16462a06302b637fbe10ce88e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.731083 containerd[1546]: time="2026-01-23T01:11:14.730109376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-mdlvc,Uid:5865bb49-f0fe-4eb4-8f6c-74bc939474ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa8a0f38fb63768f650dbc16ea852400cc1e14d16462a06302b637fbe10ce88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.731716 kubelet[2845]: E0123 01:11:14.730786 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa8a0f38fb63768f650dbc16ea852400cc1e14d16462a06302b637fbe10ce88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:14.731716 kubelet[2845]: E0123 01:11:14.730860 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa8a0f38fb63768f650dbc16ea852400cc1e14d16462a06302b637fbe10ce88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" Jan 23 01:11:14.731716 kubelet[2845]: E0123 01:11:14.730887 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa8a0f38fb63768f650dbc16ea852400cc1e14d16462a06302b637fbe10ce88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" Jan 23 01:11:14.731911 kubelet[2845]: E0123 01:11:14.730938 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa8a0f38fb63768f650dbc16ea852400cc1e14d16462a06302b637fbe10ce88e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:11:15.162996 systemd[1]: run-netns-cni\x2d91cd4a1f\x2da7df\x2d6abe\x2d652f\x2d63edae01d496.mount: Deactivated successfully. Jan 23 01:11:15.163437 systemd[1]: run-netns-cni\x2d599d2727\x2df9f7\x2dc04f\x2d15c4\x2d4a769c6125c8.mount: Deactivated successfully. Jan 23 01:11:25.587904 containerd[1546]: time="2026-01-23T01:11:25.586800031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-4jxhx,Uid:3b1f5033-cf51-4add-93c1-34dedb396092,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:25.599414 kubelet[2845]: E0123 01:11:25.594774 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:25.607724 containerd[1546]: time="2026-01-23T01:11:25.607421581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q7bt2,Uid:853a2368-0964-46f6-bbf0-478966b86444,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:26.073849 containerd[1546]: time="2026-01-23T01:11:26.072627918Z" level=error msg="Failed to destroy network for sandbox \"534d0e38d720f4c0f7b579e8d1cd74cc8dfd449bb2a3ffbac2d87c4d8fcda9a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:26.088866 systemd[1]: run-netns-cni\x2d7fd964ef\x2dc4bd\x2d5a2f\x2d4f6c\x2d1c960df512e6.mount: Deactivated successfully. Jan 23 01:11:26.111447 containerd[1546]: time="2026-01-23T01:11:26.108481030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q7bt2,Uid:853a2368-0964-46f6-bbf0-478966b86444,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"534d0e38d720f4c0f7b579e8d1cd74cc8dfd449bb2a3ffbac2d87c4d8fcda9a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:26.112142 kubelet[2845]: E0123 01:11:26.109858 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534d0e38d720f4c0f7b579e8d1cd74cc8dfd449bb2a3ffbac2d87c4d8fcda9a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:26.112142 kubelet[2845]: E0123 01:11:26.111392 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534d0e38d720f4c0f7b579e8d1cd74cc8dfd449bb2a3ffbac2d87c4d8fcda9a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-q7bt2" Jan 23 01:11:26.112142 kubelet[2845]: E0123 01:11:26.111418 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534d0e38d720f4c0f7b579e8d1cd74cc8dfd449bb2a3ffbac2d87c4d8fcda9a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-q7bt2" Jan 23 01:11:26.122130 kubelet[2845]: E0123 01:11:26.111474 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-q7bt2_kube-system(853a2368-0964-46f6-bbf0-478966b86444)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-q7bt2_kube-system(853a2368-0964-46f6-bbf0-478966b86444)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"534d0e38d720f4c0f7b579e8d1cd74cc8dfd449bb2a3ffbac2d87c4d8fcda9a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-q7bt2" podUID="853a2368-0964-46f6-bbf0-478966b86444" Jan 23 01:11:26.272181 containerd[1546]: time="2026-01-23T01:11:26.272012910Z" level=error msg="Failed to destroy network for sandbox \"4dc9a94d893b6ed1c9e91a86907d27f682836113fd0e96a335513d7aecc4ec81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:26.285983 containerd[1546]: time="2026-01-23T01:11:26.285827522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-4jxhx,Uid:3b1f5033-cf51-4add-93c1-34dedb396092,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc9a94d893b6ed1c9e91a86907d27f682836113fd0e96a335513d7aecc4ec81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:26.286694 systemd[1]: run-netns-cni\x2d7824db3f\x2de76a\x2d4bf7\x2dd603\x2d195127212e12.mount: Deactivated successfully. 
Jan 23 01:11:26.291086 kubelet[2845]: E0123 01:11:26.287843 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc9a94d893b6ed1c9e91a86907d27f682836113fd0e96a335513d7aecc4ec81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:26.291086 kubelet[2845]: E0123 01:11:26.287923 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc9a94d893b6ed1c9e91a86907d27f682836113fd0e96a335513d7aecc4ec81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" Jan 23 01:11:26.291086 kubelet[2845]: E0123 01:11:26.290623 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc9a94d893b6ed1c9e91a86907d27f682836113fd0e96a335513d7aecc4ec81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" Jan 23 01:11:26.291441 kubelet[2845]: E0123 01:11:26.291031 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dc9a94d893b6ed1c9e91a86907d27f682836113fd0e96a335513d7aecc4ec81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:11:26.639889 containerd[1546]: time="2026-01-23T01:11:26.639655212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-mdlvc,Uid:5865bb49-f0fe-4eb4-8f6c-74bc939474ad,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:27.091758 containerd[1546]: time="2026-01-23T01:11:27.088885391Z" level=error msg="Failed to destroy network for sandbox \"57bec26dd8ad07c413ba23a440f2fb341a49980d8f67743b7b7187f57d9d53f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:27.101724 containerd[1546]: time="2026-01-23T01:11:27.101458675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-mdlvc,Uid:5865bb49-f0fe-4eb4-8f6c-74bc939474ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bec26dd8ad07c413ba23a440f2fb341a49980d8f67743b7b7187f57d9d53f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:27.102427 
kubelet[2845]: E0123 01:11:27.102053 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bec26dd8ad07c413ba23a440f2fb341a49980d8f67743b7b7187f57d9d53f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:27.102427 kubelet[2845]: E0123 01:11:27.102132 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bec26dd8ad07c413ba23a440f2fb341a49980d8f67743b7b7187f57d9d53f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" Jan 23 01:11:27.102427 kubelet[2845]: E0123 01:11:27.102157 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bec26dd8ad07c413ba23a440f2fb341a49980d8f67743b7b7187f57d9d53f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" Jan 23 01:11:27.106215 kubelet[2845]: E0123 01:11:27.102221 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57bec26dd8ad07c413ba23a440f2fb341a49980d8f67743b7b7187f57d9d53f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:11:27.142178 systemd[1]: run-netns-cni\x2d61787769\x2dfdbf\x2d9f32\x2d7b12\x2df48f7ab87fe0.mount: Deactivated successfully. 
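The same pods reappear in waves (around 01:11:14, again at 01:11:25-30, then 01:11:40+) because the kubelet re-syncs pods whose sandbox creation failed, backing off between attempts. A toy Go sketch of that capped exponential backoff pattern; the constants here are hypothetical illustrations, not kubelet's actual values:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second   // hypothetical initial backoff
    	maxDelay := 5 * time.Minute // hypothetical cap
    	for attempt := 1; attempt <= 5; attempt++ {
    		// Each failed sandbox creation schedules the next sync later,
    		// which is why the error bursts below grow farther apart.
    		fmt.Printf("sync attempt %d failed; retrying in %s\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }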
Jan 23 01:11:27.558724 containerd[1546]: time="2026-01-23T01:11:27.558630520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r7jt,Uid:e2553e8f-fa3b-4995-9072-1f7cce3ee2c8,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:27.568693 kubelet[2845]: E0123 01:11:27.568617 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:27.582116 containerd[1546]: time="2026-01-23T01:11:27.581437321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nbdj7,Uid:3ed63450-d0f7-42e4-856a-8ea4e718ff98,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:28.189422 containerd[1546]: time="2026-01-23T01:11:28.185794410Z" level=error msg="Failed to destroy network for sandbox \"8ad6de7c01af4b02ca79f35d11692e906ef4730d77de6e23b6cd22c005b6ffce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:28.201693 systemd[1]: run-netns-cni\x2d8d30382e\x2da53d\x2dbf5b\x2d7b59\x2de6ae0ed320c7.mount: Deactivated successfully. Jan 23 01:11:28.203990 containerd[1546]: time="2026-01-23T01:11:28.203925808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nbdj7,Uid:3ed63450-d0f7-42e4-856a-8ea4e718ff98,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad6de7c01af4b02ca79f35d11692e906ef4730d77de6e23b6cd22c005b6ffce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:28.205764 kubelet[2845]: E0123 01:11:28.205716 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad6de7c01af4b02ca79f35d11692e906ef4730d77de6e23b6cd22c005b6ffce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:28.206904 kubelet[2845]: E0123 01:11:28.206817 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad6de7c01af4b02ca79f35d11692e906ef4730d77de6e23b6cd22c005b6ffce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nbdj7" Jan 23 01:11:28.206904 kubelet[2845]: E0123 01:11:28.206852 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad6de7c01af4b02ca79f35d11692e906ef4730d77de6e23b6cd22c005b6ffce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nbdj7" Jan 23 01:11:28.212169 kubelet[2845]: E0123 01:11:28.211157 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nbdj7_kube-system(3ed63450-d0f7-42e4-856a-8ea4e718ff98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-nbdj7_kube-system(3ed63450-d0f7-42e4-856a-8ea4e718ff98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ad6de7c01af4b02ca79f35d11692e906ef4730d77de6e23b6cd22c005b6ffce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nbdj7" podUID="3ed63450-d0f7-42e4-856a-8ea4e718ff98" Jan 23 01:11:28.225354 containerd[1546]: time="2026-01-23T01:11:28.224011539Z" level=error msg="Failed to destroy network for sandbox \"a16c2eda6a51e08223716984c65e48f4094341a95776c5af5f4afccc874e77e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:28.234154 systemd[1]: run-netns-cni\x2d389281ea\x2d8a04\x2d7c48\x2d0843\x2d56de0acf4aa5.mount: Deactivated successfully. Jan 23 01:11:28.265402 containerd[1546]: time="2026-01-23T01:11:28.265070308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r7jt,Uid:e2553e8f-fa3b-4995-9072-1f7cce3ee2c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16c2eda6a51e08223716984c65e48f4094341a95776c5af5f4afccc874e77e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:28.266469 kubelet[2845]: E0123 01:11:28.265890 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16c2eda6a51e08223716984c65e48f4094341a95776c5af5f4afccc874e77e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:28.266469 kubelet[2845]: E0123 01:11:28.265969 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16c2eda6a51e08223716984c65e48f4094341a95776c5af5f4afccc874e77e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7r7jt" Jan 23 01:11:28.266469 kubelet[2845]: E0123 01:11:28.266004 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16c2eda6a51e08223716984c65e48f4094341a95776c5af5f4afccc874e77e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7r7jt" Jan 23 01:11:28.266758 kubelet[2845]: E0123 01:11:28.266072 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a16c2eda6a51e08223716984c65e48f4094341a95776c5af5f4afccc874e77e1\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:28.596651 containerd[1546]: time="2026-01-23T01:11:28.592442638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bmvtb,Uid:662d18b3-33cf-4000-b003-c8e7f6b2e810,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:28.627383 containerd[1546]: time="2026-01-23T01:11:28.620738595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86ccb5f87d-dkzhd,Uid:369aa670-4b29-4a0c-8fff-6ae07d46c778,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:29.560107 containerd[1546]: time="2026-01-23T01:11:29.559975434Z" level=error msg="Failed to destroy network for sandbox \"2053d943f2ae8312f361239053ef5628b2f55494a9ec302556f6a921166c7e79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:29.566441 systemd[1]: run-netns-cni\x2d7b99eb4c\x2d741c\x2dc81e\x2d65bf\x2d1f969ce71790.mount: Deactivated successfully. Jan 23 01:11:29.573802 containerd[1546]: time="2026-01-23T01:11:29.573759108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7697c4c58d-kbbbl,Uid:eddc7a17-a39d-4b74-9498-cfa4bf00bdf9,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:29.585832 containerd[1546]: time="2026-01-23T01:11:29.585500674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bmvtb,Uid:662d18b3-33cf-4000-b003-c8e7f6b2e810,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2053d943f2ae8312f361239053ef5628b2f55494a9ec302556f6a921166c7e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:29.592753 kubelet[2845]: E0123 01:11:29.591701 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2053d943f2ae8312f361239053ef5628b2f55494a9ec302556f6a921166c7e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:29.592753 kubelet[2845]: E0123 01:11:29.591779 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2053d943f2ae8312f361239053ef5628b2f55494a9ec302556f6a921166c7e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 01:11:29.592753 kubelet[2845]: E0123 01:11:29.591806 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2053d943f2ae8312f361239053ef5628b2f55494a9ec302556f6a921166c7e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 
01:11:29.593753 kubelet[2845]: E0123 01:11:29.591871 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2053d943f2ae8312f361239053ef5628b2f55494a9ec302556f6a921166c7e79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:11:29.606201 containerd[1546]: time="2026-01-23T01:11:29.594671547Z" level=error msg="Failed to destroy network for sandbox \"7cd9e3f1bd659b91ca3900788a39adaa35e7070ec986d0e44a45dbc6076c8841\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:29.605737 systemd[1]: run-netns-cni\x2d0c050db6\x2de9cc\x2dd048\x2d1dbd\x2dcf2f516028cf.mount: Deactivated successfully. Jan 23 01:11:29.619004 containerd[1546]: time="2026-01-23T01:11:29.618851370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86ccb5f87d-dkzhd,Uid:369aa670-4b29-4a0c-8fff-6ae07d46c778,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd9e3f1bd659b91ca3900788a39adaa35e7070ec986d0e44a45dbc6076c8841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:29.620219 kubelet[2845]: E0123 01:11:29.620025 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd9e3f1bd659b91ca3900788a39adaa35e7070ec986d0e44a45dbc6076c8841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:29.622198 kubelet[2845]: E0123 01:11:29.621893 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd9e3f1bd659b91ca3900788a39adaa35e7070ec986d0e44a45dbc6076c8841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" Jan 23 01:11:29.622198 kubelet[2845]: E0123 01:11:29.621920 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd9e3f1bd659b91ca3900788a39adaa35e7070ec986d0e44a45dbc6076c8841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" Jan 23 01:11:29.622198 kubelet[2845]: E0123 01:11:29.621970 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cd9e3f1bd659b91ca3900788a39adaa35e7070ec986d0e44a45dbc6076c8841\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:11:30.092418 containerd[1546]: time="2026-01-23T01:11:30.089508581Z" level=error msg="Failed to destroy network for sandbox \"afc2b244c407d6558d32c7cb3d095e265e30bb32aaf00e843273fe6f5a6d2cdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:30.095409 systemd[1]: run-netns-cni\x2db6dc6ea3\x2d7462\x2deb85\x2d880b\x2d76215f47e062.mount: Deactivated successfully. Jan 23 01:11:30.100125 containerd[1546]: time="2026-01-23T01:11:30.097432194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7697c4c58d-kbbbl,Uid:eddc7a17-a39d-4b74-9498-cfa4bf00bdf9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc2b244c407d6558d32c7cb3d095e265e30bb32aaf00e843273fe6f5a6d2cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:30.101713 kubelet[2845]: E0123 01:11:30.099813 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc2b244c407d6558d32c7cb3d095e265e30bb32aaf00e843273fe6f5a6d2cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:30.101713 kubelet[2845]: E0123 01:11:30.099871 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc2b244c407d6558d32c7cb3d095e265e30bb32aaf00e843273fe6f5a6d2cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7697c4c58d-kbbbl" Jan 23 01:11:30.101713 kubelet[2845]: E0123 01:11:30.099890 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc2b244c407d6558d32c7cb3d095e265e30bb32aaf00e843273fe6f5a6d2cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7697c4c58d-kbbbl" Jan 23 01:11:30.101849 kubelet[2845]: E0123 01:11:30.100920 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7697c4c58d-kbbbl_calico-system(eddc7a17-a39d-4b74-9498-cfa4bf00bdf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-7697c4c58d-kbbbl_calico-system(eddc7a17-a39d-4b74-9498-cfa4bf00bdf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afc2b244c407d6558d32c7cb3d095e265e30bb32aaf00e843273fe6f5a6d2cdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7697c4c58d-kbbbl" podUID="eddc7a17-a39d-4b74-9498-cfa4bf00bdf9" Jan 23 01:11:32.884186 systemd[1]: Started sshd@9-10.0.0.42:22-10.0.0.1:40798.service - OpenSSH per-connection server daemon (10.0.0.1:40798). Jan 23 01:11:33.280393 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 40798 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:11:33.293933 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:33.343073 systemd-logind[1529]: New session 10 of user core. Jan 23 01:11:33.351174 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:11:34.280779 sshd[4210]: Connection closed by 10.0.0.1 port 40798 Jan 23 01:11:34.311024 systemd[1]: sshd@9-10.0.0.42:22-10.0.0.1:40798.service: Deactivated successfully. Jan 23 01:11:34.285505 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:34.318764 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:11:34.333803 systemd-logind[1529]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:11:34.341647 systemd-logind[1529]: Removed session 10. Jan 23 01:11:39.313455 systemd[1]: Started sshd@10-10.0.0.42:22-10.0.0.1:40814.service - OpenSSH per-connection server daemon (10.0.0.1:40814). Jan 23 01:11:39.765539 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 40814 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:11:39.769847 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:39.827070 systemd-logind[1529]: New session 11 of user core. Jan 23 01:11:39.840725 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:11:39.847965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316392892.mount: Deactivated successfully. 
Jan 23 01:11:40.024725 containerd[1546]: time="2026-01-23T01:11:40.024048266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:40.036485 containerd[1546]: time="2026-01-23T01:11:40.033970233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:11:40.038905 containerd[1546]: time="2026-01-23T01:11:40.038870178Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:40.051032 containerd[1546]: time="2026-01-23T01:11:40.049102022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:40.051032 containerd[1546]: time="2026-01-23T01:11:40.050513427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 25.813456953s" Jan 23 01:11:40.051032 containerd[1546]: time="2026-01-23T01:11:40.050546939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:11:40.237940 containerd[1546]: time="2026-01-23T01:11:40.237488308Z" level=info msg="CreateContainer within sandbox \"f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:11:40.336708 containerd[1546]: time="2026-01-23T01:11:40.334687475Z" level=info msg="Container 93fe018cafdd1b9f03b9e0cdabf27841eefde78295b860fb4b8fd505d8ad56c7: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:40.342020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677497872.mount: Deactivated successfully. 
Jan 23 01:11:40.555973 containerd[1546]: time="2026-01-23T01:11:40.554915128Z" level=info msg="CreateContainer within sandbox \"f03f41d94137f2a0fcf55e17d0af74d7f519393aa99bbf23c9d07ade346b86f3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"93fe018cafdd1b9f03b9e0cdabf27841eefde78295b860fb4b8fd505d8ad56c7\"" Jan 23 01:11:40.597463 containerd[1546]: time="2026-01-23T01:11:40.595735983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-4jxhx,Uid:3b1f5033-cf51-4add-93c1-34dedb396092,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:40.611689 containerd[1546]: time="2026-01-23T01:11:40.606694611Z" level=info msg="StartContainer for \"93fe018cafdd1b9f03b9e0cdabf27841eefde78295b860fb4b8fd505d8ad56c7\"" Jan 23 01:11:40.611825 kubelet[2845]: E0123 01:11:40.611496 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:40.617494 containerd[1546]: time="2026-01-23T01:11:40.617160742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nbdj7,Uid:3ed63450-d0f7-42e4-856a-8ea4e718ff98,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:40.621662 containerd[1546]: time="2026-01-23T01:11:40.621137525Z" level=info msg="connecting to shim 93fe018cafdd1b9f03b9e0cdabf27841eefde78295b860fb4b8fd505d8ad56c7" address="unix:///run/containerd/s/e6d17e0487980d681795df58a97b954f3193a193bb5dd07dbfeed38a97c1c13c" protocol=ttrpc version=3 Jan 23 01:11:40.635696 containerd[1546]: time="2026-01-23T01:11:40.634794273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r7jt,Uid:e2553e8f-fa3b-4995-9072-1f7cce3ee2c8,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:40.696531 kubelet[2845]: E0123 01:11:40.683676 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:40.731675 containerd[1546]: time="2026-01-23T01:11:40.731022080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q7bt2,Uid:853a2368-0964-46f6-bbf0-478966b86444,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:40.830417 sshd[4232]: Connection closed by 10.0.0.1 port 40814 Jan 23 01:11:40.830910 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:40.864701 systemd[1]: sshd@10-10.0.0.42:22-10.0.0.1:40814.service: Deactivated successfully. Jan 23 01:11:40.874219 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:11:40.888956 systemd-logind[1529]: Session 11 logged out. Waiting for processes to exit. Jan 23 01:11:40.894674 systemd-logind[1529]: Removed session 11. Jan 23 01:11:40.956959 systemd[1]: Started cri-containerd-93fe018cafdd1b9f03b9e0cdabf27841eefde78295b860fb4b8fd505d8ad56c7.scope - libcontainer container 93fe018cafdd1b9f03b9e0cdabf27841eefde78295b860fb4b8fd505d8ad56c7. Jan 23 01:11:41.458992 containerd[1546]: time="2026-01-23T01:11:41.456751378Z" level=error msg="Failed to destroy network for sandbox \"f64d641eee8eaedbcc3fa81225028edde5997af9a1b770a31b7f28f906dc848e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:41.484222 systemd[1]: run-netns-cni\x2d6c551307\x2d8b59\x2dd26a\x2dc59f\x2debbae27c32a3.mount: Deactivated successfully. 
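With the calico/node image pulled (the 25.8s PullImage above) and container 93fe018c... started, the node agent can finally create the file whose absence caused every earlier failure. A minimal sketch of that writer side, assuming a write-then-rename scheme; the directory and file name come from the log, but the atomic-write detail and helper name are assumptions, not Calico's actual implementation:

    package main

    import (
    	"os"
    	"path/filepath"
    )

    func writeNodename(name string) error {
    	dir := "/var/lib/calico"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	tmp := filepath.Join(dir, ".nodename.tmp")
    	if err := os.WriteFile(tmp, []byte(name), 0o644); err != nil {
    		return err
    	}
    	// Rename is atomic within a filesystem, so a concurrently running
    	// CNI plugin never observes a partially written nodename file.
    	return os.Rename(tmp, filepath.Join(dir, "nodename"))
    }

    func main() {
    	name, err := os.Hostname()
    	if err != nil {
    		panic(err)
    	}
    	if err := writeNodename(name); err != nil {
    		panic(err)
    	}
    }

Until this write happens, the retries that follow still fail with the same stat error, which is exactly what the next entries show.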
Jan 23 01:11:41.501718 containerd[1546]: time="2026-01-23T01:11:41.500131787Z" level=error msg="Failed to destroy network for sandbox \"732389aed760c75338c56f00237d52143881382e1f2faadb5136ba00baf2c999\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:41.530477 systemd[1]: run-netns-cni\x2d2c617687\x2d5aaa\x2d5459\x2db7bc\x2d4939d1c769ef.mount: Deactivated successfully. Jan 23 01:11:41.552041 containerd[1546]: time="2026-01-23T01:11:41.550955980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r7jt,Uid:e2553e8f-fa3b-4995-9072-1f7cce3ee2c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64d641eee8eaedbcc3fa81225028edde5997af9a1b770a31b7f28f906dc848e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:41.554765 kubelet[2845]: E0123 01:11:41.554718 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64d641eee8eaedbcc3fa81225028edde5997af9a1b770a31b7f28f906dc848e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:41.558086 kubelet[2845]: E0123 01:11:41.558047 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64d641eee8eaedbcc3fa81225028edde5997af9a1b770a31b7f28f906dc848e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7r7jt" Jan 23 01:11:41.558197 kubelet[2845]: E0123 01:11:41.558173 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64d641eee8eaedbcc3fa81225028edde5997af9a1b770a31b7f28f906dc848e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7r7jt" Jan 23 01:11:41.560077 kubelet[2845]: E0123 01:11:41.559480 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f64d641eee8eaedbcc3fa81225028edde5997af9a1b770a31b7f28f906dc848e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:41.566176 containerd[1546]: time="2026-01-23T01:11:41.566111206Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-bmvtb,Uid:662d18b3-33cf-4000-b003-c8e7f6b2e810,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:41.602113 containerd[1546]: time="2026-01-23T01:11:41.601078889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nbdj7,Uid:3ed63450-d0f7-42e4-856a-8ea4e718ff98,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"732389aed760c75338c56f00237d52143881382e1f2faadb5136ba00baf2c999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:41.606189 kubelet[2845]: E0123 01:11:41.603013 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732389aed760c75338c56f00237d52143881382e1f2faadb5136ba00baf2c999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:41.606189 kubelet[2845]: E0123 01:11:41.603706 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732389aed760c75338c56f00237d52143881382e1f2faadb5136ba00baf2c999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nbdj7" Jan 23 01:11:41.606189 kubelet[2845]: E0123 01:11:41.603750 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732389aed760c75338c56f00237d52143881382e1f2faadb5136ba00baf2c999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nbdj7" Jan 23 01:11:41.609006 containerd[1546]: time="2026-01-23T01:11:41.608710589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-mdlvc,Uid:5865bb49-f0fe-4eb4-8f6c-74bc939474ad,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:41.609929 kubelet[2845]: E0123 01:11:41.604067 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nbdj7_kube-system(3ed63450-d0f7-42e4-856a-8ea4e718ff98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nbdj7_kube-system(3ed63450-d0f7-42e4-856a-8ea4e718ff98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"732389aed760c75338c56f00237d52143881382e1f2faadb5136ba00baf2c999\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nbdj7" podUID="3ed63450-d0f7-42e4-856a-8ea4e718ff98" Jan 23 01:11:41.671551 containerd[1546]: time="2026-01-23T01:11:41.669807480Z" level=error msg="Failed to destroy network for sandbox \"06fd19718c6e95af6d3b2ee72e8a02f493996b4466a6ef58b4ad87ddb662474a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 23 01:11:41.694784 systemd[1]: run-netns-cni\x2d5b47b9eb\x2dcea5\x2dc171\x2d0feb\x2d88daa540cf64.mount: Deactivated successfully. Jan 23 01:11:41.760547 containerd[1546]: time="2026-01-23T01:11:41.760497213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q7bt2,Uid:853a2368-0964-46f6-bbf0-478966b86444,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06fd19718c6e95af6d3b2ee72e8a02f493996b4466a6ef58b4ad87ddb662474a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:41.821700 kubelet[2845]: E0123 01:11:41.820730 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06fd19718c6e95af6d3b2ee72e8a02f493996b4466a6ef58b4ad87ddb662474a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:41.821700 kubelet[2845]: E0123 01:11:41.821760 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06fd19718c6e95af6d3b2ee72e8a02f493996b4466a6ef58b4ad87ddb662474a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-q7bt2" Jan 23 01:11:41.821700 kubelet[2845]: E0123 01:11:41.821793 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06fd19718c6e95af6d3b2ee72e8a02f493996b4466a6ef58b4ad87ddb662474a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-q7bt2" Jan 23 01:11:41.843997 kubelet[2845]: E0123 01:11:41.831171 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-q7bt2_kube-system(853a2368-0964-46f6-bbf0-478966b86444)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-q7bt2_kube-system(853a2368-0964-46f6-bbf0-478966b86444)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06fd19718c6e95af6d3b2ee72e8a02f493996b4466a6ef58b4ad87ddb662474a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-q7bt2" podUID="853a2368-0964-46f6-bbf0-478966b86444" Jan 23 01:11:42.129486 containerd[1546]: time="2026-01-23T01:11:42.127204464Z" level=error msg="Failed to destroy network for sandbox \"c911ad0ed1d240592e61a46e6bfb856655a60a007d75d5600bc61f7d56ced7a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:42.143931 systemd[1]: run-netns-cni\x2d225fda02\x2d2c47\x2d110a\x2d3b69\x2d621fac146579.mount: Deactivated successfully. 
Jan 23 01:11:42.176105 containerd[1546]: time="2026-01-23T01:11:42.175886963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-4jxhx,Uid:3b1f5033-cf51-4add-93c1-34dedb396092,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c911ad0ed1d240592e61a46e6bfb856655a60a007d75d5600bc61f7d56ced7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:42.183524 kubelet[2845]: E0123 01:11:42.183467 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c911ad0ed1d240592e61a46e6bfb856655a60a007d75d5600bc61f7d56ced7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:42.183927 kubelet[2845]: E0123 01:11:42.183895 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c911ad0ed1d240592e61a46e6bfb856655a60a007d75d5600bc61f7d56ced7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" Jan 23 01:11:42.184040 kubelet[2845]: E0123 01:11:42.184017 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c911ad0ed1d240592e61a46e6bfb856655a60a007d75d5600bc61f7d56ced7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" Jan 23 01:11:42.184715 kubelet[2845]: E0123 01:11:42.184184 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c911ad0ed1d240592e61a46e6bfb856655a60a007d75d5600bc61f7d56ced7a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:11:42.207893 containerd[1546]: time="2026-01-23T01:11:42.207817575Z" level=info msg="StartContainer for \"93fe018cafdd1b9f03b9e0cdabf27841eefde78295b860fb4b8fd505d8ad56c7\" returns successfully" Jan 23 01:11:42.454002 containerd[1546]: time="2026-01-23T01:11:42.452114264Z" level=error msg="Failed to destroy network for sandbox \"f2761cec9641e934d808b5b1210f999aac3544ba211f8c48a52806a214c7285a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:42.465855 systemd[1]: 
run-netns-cni\x2d9e8a0523\x2df609\x2dde32\x2dcd5e\x2d68a61b77ab06.mount: Deactivated successfully. Jan 23 01:11:42.471552 containerd[1546]: time="2026-01-23T01:11:42.470890168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-mdlvc,Uid:5865bb49-f0fe-4eb4-8f6c-74bc939474ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2761cec9641e934d808b5b1210f999aac3544ba211f8c48a52806a214c7285a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:42.480383 kubelet[2845]: E0123 01:11:42.480107 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2761cec9641e934d808b5b1210f999aac3544ba211f8c48a52806a214c7285a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:42.480953 kubelet[2845]: E0123 01:11:42.480789 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2761cec9641e934d808b5b1210f999aac3544ba211f8c48a52806a214c7285a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" Jan 23 01:11:42.480953 kubelet[2845]: E0123 01:11:42.480826 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2761cec9641e934d808b5b1210f999aac3544ba211f8c48a52806a214c7285a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" Jan 23 01:11:42.483950 kubelet[2845]: E0123 01:11:42.483058 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2761cec9641e934d808b5b1210f999aac3544ba211f8c48a52806a214c7285a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:11:42.655113 containerd[1546]: time="2026-01-23T01:11:42.646848591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86ccb5f87d-dkzhd,Uid:369aa670-4b29-4a0c-8fff-6ae07d46c778,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:42.762519 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:11:42.766673 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 23 01:11:42.863516 containerd[1546]: time="2026-01-23T01:11:42.862996290Z" level=error msg="Failed to destroy network for sandbox \"a6e64ccf0c96d0774b8a879047e09826970a4b89b79bdc896af5dab3f32e295a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:42.884507 containerd[1546]: time="2026-01-23T01:11:42.883503848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bmvtb,Uid:662d18b3-33cf-4000-b003-c8e7f6b2e810,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e64ccf0c96d0774b8a879047e09826970a4b89b79bdc896af5dab3f32e295a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:42.887924 kubelet[2845]: E0123 01:11:42.887132 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e64ccf0c96d0774b8a879047e09826970a4b89b79bdc896af5dab3f32e295a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:42.892396 kubelet[2845]: E0123 01:11:42.891850 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e64ccf0c96d0774b8a879047e09826970a4b89b79bdc896af5dab3f32e295a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 01:11:42.892396 kubelet[2845]: E0123 01:11:42.891991 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e64ccf0c96d0774b8a879047e09826970a4b89b79bdc896af5dab3f32e295a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bmvtb" Jan 23 01:11:42.892518 kubelet[2845]: E0123 01:11:42.892178 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6e64ccf0c96d0774b8a879047e09826970a4b89b79bdc896af5dab3f32e295a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:11:42.895007 systemd[1]: run-netns-cni\x2db85e31ff\x2d7cc5\x2d0499\x2da898\x2dd845ac9cb710.mount: Deactivated successfully. 
Jan 23 01:11:43.276869 kubelet[2845]: E0123 01:11:43.275713 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:43.446428 containerd[1546]: time="2026-01-23T01:11:43.444194670Z" level=error msg="Failed to destroy network for sandbox \"0fb593a6d1b02365ddfc2019fff9444d5eed402b3ba7529b8844d4b987398489\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:43.458211 systemd[1]: run-netns-cni\x2dd2d656a1\x2d4c8b\x2d94c9\x2dc15b\x2d13dce0afe849.mount: Deactivated successfully. Jan 23 01:11:43.469757 containerd[1546]: time="2026-01-23T01:11:43.469713966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86ccb5f87d-dkzhd,Uid:369aa670-4b29-4a0c-8fff-6ae07d46c778,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb593a6d1b02365ddfc2019fff9444d5eed402b3ba7529b8844d4b987398489\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:43.478494 kubelet[2845]: E0123 01:11:43.478442 2845 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb593a6d1b02365ddfc2019fff9444d5eed402b3ba7529b8844d4b987398489\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:43.480014 kubelet[2845]: E0123 01:11:43.478882 2845 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb593a6d1b02365ddfc2019fff9444d5eed402b3ba7529b8844d4b987398489\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" Jan 23 01:11:43.480014 kubelet[2845]: E0123 01:11:43.478920 2845 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb593a6d1b02365ddfc2019fff9444d5eed402b3ba7529b8844d4b987398489\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" Jan 23 01:11:43.480014 kubelet[2845]: E0123 01:11:43.478991 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fb593a6d1b02365ddfc2019fff9444d5eed402b3ba7529b8844d4b987398489\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:11:43.784075 kubelet[2845]: I0123 01:11:43.783105 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mgtxl" podStartSLOduration=5.160603796 podStartE2EDuration="54.783078877s" podCreationTimestamp="2026-01-23 01:10:49 +0000 UTC" firstStartedPulling="2026-01-23 01:10:50.435977637 +0000 UTC m=+67.916062262" lastFinishedPulling="2026-01-23 01:11:40.058452708 +0000 UTC m=+117.538537343" observedRunningTime="2026-01-23 01:11:43.446956837 +0000 UTC m=+120.927041462" watchObservedRunningTime="2026-01-23 01:11:43.783078877 +0000 UTC m=+121.263163502" Jan 23 01:11:43.911964 kubelet[2845]: I0123 01:11:43.908843 2845 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-whisker-backend-key-pair\") pod \"eddc7a17-a39d-4b74-9498-cfa4bf00bdf9\" (UID: \"eddc7a17-a39d-4b74-9498-cfa4bf00bdf9\") " Jan 23 01:11:43.911964 kubelet[2845]: I0123 01:11:43.908909 2845 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txb67\" (UniqueName: \"kubernetes.io/projected/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-kube-api-access-txb67\") pod \"eddc7a17-a39d-4b74-9498-cfa4bf00bdf9\" (UID: \"eddc7a17-a39d-4b74-9498-cfa4bf00bdf9\") " Jan 23 01:11:43.911964 kubelet[2845]: I0123 01:11:43.908937 2845 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-whisker-ca-bundle\") pod \"eddc7a17-a39d-4b74-9498-cfa4bf00bdf9\" (UID: \"eddc7a17-a39d-4b74-9498-cfa4bf00bdf9\") " Jan 23 01:11:43.911964 kubelet[2845]: I0123 01:11:43.911540 2845 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "eddc7a17-a39d-4b74-9498-cfa4bf00bdf9" (UID: "eddc7a17-a39d-4b74-9498-cfa4bf00bdf9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:11:43.965887 kubelet[2845]: I0123 01:11:43.965107 2845 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eddc7a17-a39d-4b74-9498-cfa4bf00bdf9" (UID: "eddc7a17-a39d-4b74-9498-cfa4bf00bdf9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:11:43.968984 systemd[1]: var-lib-kubelet-pods-eddc7a17\x2da39d\x2d4b74\x2d9498\x2dcfa4bf00bdf9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 01:11:43.985077 systemd[1]: var-lib-kubelet-pods-eddc7a17\x2da39d\x2d4b74\x2d9498\x2dcfa4bf00bdf9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtxb67.mount: Deactivated successfully. Jan 23 01:11:43.989703 kubelet[2845]: I0123 01:11:43.989166 2845 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-kube-api-access-txb67" (OuterVolumeSpecName: "kube-api-access-txb67") pod "eddc7a17-a39d-4b74-9498-cfa4bf00bdf9" (UID: "eddc7a17-a39d-4b74-9498-cfa4bf00bdf9"). InnerVolumeSpecName "kube-api-access-txb67". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:11:44.013496 kubelet[2845]: I0123 01:11:44.013003 2845 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-txb67\" (UniqueName: \"kubernetes.io/projected/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-kube-api-access-txb67\") on node \"localhost\" DevicePath \"\"" Jan 23 01:11:44.013496 kubelet[2845]: I0123 01:11:44.013140 2845 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 23 01:11:44.013496 kubelet[2845]: I0123 01:11:44.013155 2845 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 23 01:11:44.304817 kubelet[2845]: E0123 01:11:44.304132 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:44.329221 systemd[1]: Removed slice kubepods-besteffort-podeddc7a17_a39d_4b74_9498_cfa4bf00bdf9.slice - libcontainer container kubepods-besteffort-podeddc7a17_a39d_4b74_9498_cfa4bf00bdf9.slice. Jan 23 01:11:44.909850 systemd[1]: Created slice kubepods-besteffort-pod8cf6c8e5_97b0_4acf_833b_96387e1e4a45.slice - libcontainer container kubepods-besteffort-pod8cf6c8e5_97b0_4acf_833b_96387e1e4a45.slice. Jan 23 01:11:44.938557 kubelet[2845]: I0123 01:11:44.937682 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwhg\" (UniqueName: \"kubernetes.io/projected/8cf6c8e5-97b0-4acf-833b-96387e1e4a45-kube-api-access-xjwhg\") pod \"whisker-8554d56494-ldgsm\" (UID: \"8cf6c8e5-97b0-4acf-833b-96387e1e4a45\") " pod="calico-system/whisker-8554d56494-ldgsm" Jan 23 01:11:44.938557 kubelet[2845]: I0123 01:11:44.937755 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8cf6c8e5-97b0-4acf-833b-96387e1e4a45-whisker-backend-key-pair\") pod \"whisker-8554d56494-ldgsm\" (UID: \"8cf6c8e5-97b0-4acf-833b-96387e1e4a45\") " pod="calico-system/whisker-8554d56494-ldgsm" Jan 23 01:11:44.938557 kubelet[2845]: I0123 01:11:44.937780 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cf6c8e5-97b0-4acf-833b-96387e1e4a45-whisker-ca-bundle\") pod \"whisker-8554d56494-ldgsm\" (UID: \"8cf6c8e5-97b0-4acf-833b-96387e1e4a45\") " pod="calico-system/whisker-8554d56494-ldgsm" Jan 23 01:11:45.238812 containerd[1546]: time="2026-01-23T01:11:45.237800553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8554d56494-ldgsm,Uid:8cf6c8e5-97b0-4acf-833b-96387e1e4a45,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:45.862833 systemd[1]: Started sshd@11-10.0.0.42:22-10.0.0.1:37316.service - OpenSSH per-connection server daemon (10.0.0.1:37316). Jan 23 01:11:46.011509 sshd[4621]: Accepted publickey for core from 10.0.0.1 port 37316 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:11:46.016774 sshd-session[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:46.035161 systemd-logind[1529]: New session 12 of user core. 
Jan 23 01:11:46.043871 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:11:46.342771 systemd-networkd[1450]: calia427329633d: Link UP Jan 23 01:11:46.344450 systemd-networkd[1450]: calia427329633d: Gained carrier Jan 23 01:11:46.495411 containerd[1546]: 2026-01-23 01:11:45.372 [INFO][4588] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:11:46.495411 containerd[1546]: 2026-01-23 01:11:45.493 [INFO][4588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8554d56494--ldgsm-eth0 whisker-8554d56494- calico-system 8cf6c8e5-97b0-4acf-833b-96387e1e4a45 1238 0 2026-01-23 01:11:44 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8554d56494 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8554d56494-ldgsm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia427329633d [] [] }} ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Namespace="calico-system" Pod="whisker-8554d56494-ldgsm" WorkloadEndpoint="localhost-k8s-whisker--8554d56494--ldgsm-" Jan 23 01:11:46.495411 containerd[1546]: 2026-01-23 01:11:45.494 [INFO][4588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Namespace="calico-system" Pod="whisker-8554d56494-ldgsm" WorkloadEndpoint="localhost-k8s-whisker--8554d56494--ldgsm-eth0" Jan 23 01:11:46.495411 containerd[1546]: 2026-01-23 01:11:45.933 [INFO][4614] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" HandleID="k8s-pod-network.ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Workload="localhost-k8s-whisker--8554d56494--ldgsm-eth0" Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:45.933 [INFO][4614] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" HandleID="k8s-pod-network.ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Workload="localhost-k8s-whisker--8554d56494--ldgsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8554d56494-ldgsm", "timestamp":"2026-01-23 01:11:45.93316744 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:45.933 [INFO][4614] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:45.934 [INFO][4614] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:45.935 [INFO][4614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:45.977 [INFO][4614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" host="localhost" Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:46.059 [INFO][4614] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:46.118 [INFO][4614] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:46.134 [INFO][4614] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:46.150 [INFO][4614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:46.498544 containerd[1546]: 2026-01-23 01:11:46.150 [INFO][4614] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" host="localhost" Jan 23 01:11:46.499136 containerd[1546]: 2026-01-23 01:11:46.159 [INFO][4614] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7 Jan 23 01:11:46.499136 containerd[1546]: 2026-01-23 01:11:46.189 [INFO][4614] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" host="localhost" Jan 23 01:11:46.499136 containerd[1546]: 2026-01-23 01:11:46.232 [INFO][4614] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" host="localhost" Jan 23 01:11:46.499136 containerd[1546]: 2026-01-23 01:11:46.233 [INFO][4614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" host="localhost" Jan 23 01:11:46.499136 containerd[1546]: 2026-01-23 01:11:46.234 [INFO][4614] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:46.499136 containerd[1546]: 2026-01-23 01:11:46.234 [INFO][4614] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" HandleID="k8s-pod-network.ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Workload="localhost-k8s-whisker--8554d56494--ldgsm-eth0" Jan 23 01:11:46.500841 containerd[1546]: 2026-01-23 01:11:46.247 [INFO][4588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Namespace="calico-system" Pod="whisker-8554d56494-ldgsm" WorkloadEndpoint="localhost-k8s-whisker--8554d56494--ldgsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8554d56494--ldgsm-eth0", GenerateName:"whisker-8554d56494-", Namespace:"calico-system", SelfLink:"", UID:"8cf6c8e5-97b0-4acf-833b-96387e1e4a45", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8554d56494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8554d56494-ldgsm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia427329633d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:46.500841 containerd[1546]: 2026-01-23 01:11:46.248 [INFO][4588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Namespace="calico-system" Pod="whisker-8554d56494-ldgsm" WorkloadEndpoint="localhost-k8s-whisker--8554d56494--ldgsm-eth0" Jan 23 01:11:46.501138 containerd[1546]: 2026-01-23 01:11:46.248 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia427329633d ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Namespace="calico-system" Pod="whisker-8554d56494-ldgsm" WorkloadEndpoint="localhost-k8s-whisker--8554d56494--ldgsm-eth0" Jan 23 01:11:46.501138 containerd[1546]: 2026-01-23 01:11:46.342 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Namespace="calico-system" Pod="whisker-8554d56494-ldgsm" WorkloadEndpoint="localhost-k8s-whisker--8554d56494--ldgsm-eth0" Jan 23 01:11:46.501740 containerd[1546]: 2026-01-23 01:11:46.343 [INFO][4588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Namespace="calico-system" Pod="whisker-8554d56494-ldgsm" WorkloadEndpoint="localhost-k8s-whisker--8554d56494--ldgsm-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8554d56494--ldgsm-eth0", GenerateName:"whisker-8554d56494-", Namespace:"calico-system", SelfLink:"", UID:"8cf6c8e5-97b0-4acf-833b-96387e1e4a45", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8554d56494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7", Pod:"whisker-8554d56494-ldgsm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia427329633d", MAC:"0e:9a:35:04:4f:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:46.501992 containerd[1546]: 2026-01-23 01:11:46.430 [INFO][4588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" Namespace="calico-system" Pod="whisker-8554d56494-ldgsm" WorkloadEndpoint="localhost-k8s-whisker--8554d56494--ldgsm-eth0" Jan 23 01:11:46.563969 kubelet[2845]: I0123 01:11:46.563738 2845 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eddc7a17-a39d-4b74-9498-cfa4bf00bdf9" path="/var/lib/kubelet/pods/eddc7a17-a39d-4b74-9498-cfa4bf00bdf9/volumes" Jan 23 01:11:46.623842 sshd[4628]: Connection closed by 10.0.0.1 port 37316 Jan 23 01:11:46.625051 sshd-session[4621]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:46.661092 systemd[1]: sshd@11-10.0.0.42:22-10.0.0.1:37316.service: Deactivated successfully. Jan 23 01:11:46.667912 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:11:46.685118 systemd-logind[1529]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:11:46.699020 systemd-logind[1529]: Removed session 12. Jan 23 01:11:47.024855 containerd[1546]: time="2026-01-23T01:11:47.022792749Z" level=info msg="connecting to shim ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7" address="unix:///run/containerd/s/db98984c76b6c543a46de0440532c174c5c181404311c01b68d017b0e2db7f84" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:47.349132 systemd[1]: Started cri-containerd-ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7.scope - libcontainer container ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7.
Jan 23 01:11:47.499065 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:11:47.779817 systemd-networkd[1450]: calia427329633d: Gained IPv6LL Jan 23 01:11:47.827726 containerd[1546]: time="2026-01-23T01:11:47.826127870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8554d56494-ldgsm,Uid:8cf6c8e5-97b0-4acf-833b-96387e1e4a45,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca995ea3ab3855c145c495592c6a3cc21acae9d48dcd7113e8df182e8a9e1ac7\"" Jan 23 01:11:47.881998 containerd[1546]: time="2026-01-23T01:11:47.881810000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:11:47.990921 containerd[1546]: time="2026-01-23T01:11:47.989775832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:47.997543 containerd[1546]: time="2026-01-23T01:11:47.995947382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:11:48.008840 containerd[1546]: time="2026-01-23T01:11:48.008775126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:11:48.011083 kubelet[2845]: E0123 01:11:48.009981 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:11:48.011083 kubelet[2845]: E0123 01:11:48.010155 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:11:48.011959 kubelet[2845]: E0123 01:11:48.011195 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8554d56494-ldgsm_calico-system(8cf6c8e5-97b0-4acf-833b-96387e1e4a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:48.018776 containerd[1546]: time="2026-01-23T01:11:48.017068400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:11:48.214907 containerd[1546]: time="2026-01-23T01:11:48.213563027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:48.222540 containerd[1546]: time="2026-01-23T01:11:48.222403772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:11:48.222540 containerd[1546]: time="2026-01-23T01:11:48.222521322Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:11:48.223994 kubelet[2845]: E0123 01:11:48.223725 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:11:48.223994 kubelet[2845]: E0123 01:11:48.223890 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:11:48.224125 kubelet[2845]: E0123 01:11:48.223992 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8554d56494-ldgsm_calico-system(8cf6c8e5-97b0-4acf-833b-96387e1e4a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:48.224374 kubelet[2845]: E0123 01:11:48.224154 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:11:48.442913 kubelet[2845]: E0123 01:11:48.442564 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:11:49.460813 kubelet[2845]: E0123 01:11:49.456203 2845 pod_workers.go:1324] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:11:50.715483 systemd-networkd[1450]: vxlan.calico: Link UP Jan 23 01:11:50.715495 systemd-networkd[1450]: vxlan.calico: Gained carrier Jan 23 01:11:51.677861 systemd[1]: Started sshd@12-10.0.0.42:22-10.0.0.1:37322.service - OpenSSH per-connection server daemon (10.0.0.1:37322). Jan 23 01:11:52.218744 sshd[4885]: Accepted publickey for core from 10.0.0.1 port 37322 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:11:52.229699 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:52.255818 systemd-networkd[1450]: vxlan.calico: Gained IPv6LL Jan 23 01:11:52.263178 systemd-logind[1529]: New session 13 of user core. Jan 23 01:11:52.274898 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:11:53.172805 sshd[4899]: Connection closed by 10.0.0.1 port 37322 Jan 23 01:11:53.193057 sshd-session[4885]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:53.239143 systemd-logind[1529]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:11:53.244756 systemd[1]: sshd@12-10.0.0.42:22-10.0.0.1:37322.service: Deactivated successfully. Jan 23 01:11:53.254790 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:11:53.286428 systemd-logind[1529]: Removed session 13. 
Jan 23 01:11:53.565583 kubelet[2845]: E0123 01:11:53.565535 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:53.569509 containerd[1546]: time="2026-01-23T01:11:53.569464243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nbdj7,Uid:3ed63450-d0f7-42e4-856a-8ea4e718ff98,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:54.602409 containerd[1546]: time="2026-01-23T01:11:54.600728488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r7jt,Uid:e2553e8f-fa3b-4995-9072-1f7cce3ee2c8,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:55.120489 systemd-networkd[1450]: cali8eacdd179e8: Link UP Jan 23 01:11:55.129450 systemd-networkd[1450]: cali8eacdd179e8: Gained carrier Jan 23 01:11:55.248826 containerd[1546]: 2026-01-23 01:11:54.170 [INFO][4934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--nbdj7-eth0 coredns-66bc5c9577- kube-system 3ed63450-d0f7-42e4-856a-8ea4e718ff98 1043 0 2026-01-23 01:09:43 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-nbdj7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8eacdd179e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nbdj7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nbdj7-" Jan 23 01:11:55.248826 containerd[1546]: 2026-01-23 01:11:54.171 [INFO][4934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nbdj7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" Jan 23 01:11:55.248826 containerd[1546]: 2026-01-23 01:11:54.595 [INFO][4948] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" HandleID="k8s-pod-network.ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Workload="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.603 [INFO][4948] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" HandleID="k8s-pod-network.ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Workload="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000460460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-nbdj7", "timestamp":"2026-01-23 01:11:54.595097999 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.604 [INFO][4948] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.604 [INFO][4948] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.604 [INFO][4948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.690 [INFO][4948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" host="localhost" Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.748 [INFO][4948] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.788 [INFO][4948] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.822 [INFO][4948] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.865 [INFO][4948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:55.251097 containerd[1546]: 2026-01-23 01:11:54.865 [INFO][4948] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" host="localhost" Jan 23 01:11:55.252727 containerd[1546]: 2026-01-23 01:11:54.889 [INFO][4948] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f Jan 23 01:11:55.252727 containerd[1546]: 2026-01-23 01:11:54.936 [INFO][4948] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" host="localhost" Jan 23 01:11:55.252727 containerd[1546]: 2026-01-23 01:11:55.021 [INFO][4948] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" host="localhost" Jan 23 01:11:55.252727 containerd[1546]: 2026-01-23 01:11:55.021 [INFO][4948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" host="localhost" Jan 23 01:11:55.252727 containerd[1546]: 2026-01-23 01:11:55.022 [INFO][4948] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:55.252727 containerd[1546]: 2026-01-23 01:11:55.022 [INFO][4948] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" HandleID="k8s-pod-network.ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Workload="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" Jan 23 01:11:55.252914 containerd[1546]: 2026-01-23 01:11:55.070 [INFO][4934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nbdj7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nbdj7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3ed63450-d0f7-42e4-856a-8ea4e718ff98", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-nbdj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eacdd179e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:55.252914 containerd[1546]: 2026-01-23 01:11:55.091 [INFO][4934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nbdj7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" Jan 23 01:11:55.252914 containerd[1546]: 2026-01-23 01:11:55.092 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eacdd179e8 ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nbdj7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" Jan 23 01:11:55.252914 containerd[1546]: 2026-01-23 01:11:55.141
[INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nbdj7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" Jan 23 01:11:55.252914 containerd[1546]: 2026-01-23 01:11:55.143 [INFO][4934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nbdj7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nbdj7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3ed63450-d0f7-42e4-856a-8ea4e718ff98", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f", Pod:"coredns-66bc5c9577-nbdj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eacdd179e8", MAC:"ca:27:89:a4:c3:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:55.252914 containerd[1546]: 2026-01-23 01:11:55.233 [INFO][4934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nbdj7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nbdj7-eth0" Jan 23 01:11:55.564449 containerd[1546]: time="2026-01-23T01:11:55.559142708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-4jxhx,Uid:3b1f5033-cf51-4add-93c1-34dedb396092,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:55.564773 kubelet[2845]: E0123 01:11:55.564552 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:55.578194 containerd[1546]: time="2026-01-23T01:11:55.576073717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q7bt2,Uid:853a2368-0964-46f6-bbf0-478966b86444,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:55.620194 containerd[1546]: time="2026-01-23T01:11:55.589996100Z" level=info msg="connecting to shim ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f" address="unix:///run/containerd/s/1c87c1ca71e2a3ad0a09d9e7ca49df0fb7dda9019a32fa9fb167b198387265ed" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:55.628157 containerd[1546]: time="2026-01-23T01:11:55.604491282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-mdlvc,Uid:5865bb49-f0fe-4eb4-8f6c-74bc939474ad,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:11:56.310976 systemd[1]: Started cri-containerd-ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f.scope - libcontainer container ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f. Jan 23 01:11:56.455200 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:11:56.823115 systemd-networkd[1450]: caliadd415220c6: Link UP Jan 23 01:11:56.826009 systemd-networkd[1450]: caliadd415220c6: Gained carrier Jan 23 01:11:56.990552 systemd-networkd[1450]: cali8eacdd179e8: Gained IPv6LL Jan 23 01:11:57.052954 containerd[1546]: time="2026-01-23T01:11:57.049902282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nbdj7,Uid:3ed63450-d0f7-42e4-856a-8ea4e718ff98,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f\"" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:55.170 [INFO][4957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7r7jt-eth0 csi-node-driver- calico-system e2553e8f-fa3b-4995-9072-1f7cce3ee2c8 894 0 2026-01-23 01:10:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7r7jt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliadd415220c6 [] [] }} ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Namespace="calico-system" Pod="csi-node-driver-7r7jt" WorkloadEndpoint="localhost-k8s-csi--node--driver--7r7jt-" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:55.176 [INFO][4957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Namespace="calico-system" Pod="csi-node-driver-7r7jt" WorkloadEndpoint="localhost-k8s-csi--node--driver--7r7jt-eth0" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.192 [INFO][4979] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" HandleID="k8s-pod-network.fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Workload="localhost-k8s-csi--node--driver--7r7jt-eth0" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.198 [INFO][4979] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" HandleID="k8s-pod-network.fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Workload="localhost-k8s-csi--node--driver--7r7jt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000555eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7r7jt", "timestamp":"2026-01-23 01:11:56.192528073 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.198 [INFO][4979] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.199 [INFO][4979] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.200 [INFO][4979] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.258 [INFO][4979] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" host="localhost" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.345 [INFO][4979] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.402 [INFO][4979] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.450 [INFO][4979] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.526 [INFO][4979] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.526 [INFO][4979] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" host="localhost" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.570 [INFO][4979] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255 Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.629 [INFO][4979] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" host="localhost" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.728 [INFO][4979] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" host="localhost" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.729 [INFO][4979] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" host="localhost" Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.730 [INFO][4979] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:57.052954 containerd[1546]: 2026-01-23 01:11:56.730 [INFO][4979] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" HandleID="k8s-pod-network.fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Workload="localhost-k8s-csi--node--driver--7r7jt-eth0" Jan 23 01:11:57.060886 containerd[1546]: 2026-01-23 01:11:56.800 [INFO][4957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Namespace="calico-system" Pod="csi-node-driver-7r7jt" WorkloadEndpoint="localhost-k8s-csi--node--driver--7r7jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7r7jt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2553e8f-fa3b-4995-9072-1f7cce3ee2c8", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7r7jt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliadd415220c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:57.060886 containerd[1546]: 2026-01-23 01:11:56.800 [INFO][4957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Namespace="calico-system" Pod="csi-node-driver-7r7jt" WorkloadEndpoint="localhost-k8s-csi--node--driver--7r7jt-eth0" Jan 23 01:11:57.060886 containerd[1546]: 2026-01-23 01:11:56.801 [INFO][4957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadd415220c6 ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Namespace="calico-system" Pod="csi-node-driver-7r7jt" WorkloadEndpoint="localhost-k8s-csi--node--driver--7r7jt-eth0" Jan 23 01:11:57.060886 containerd[1546]: 2026-01-23 01:11:56.844 [INFO][4957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Namespace="calico-system" Pod="csi-node-driver-7r7jt" WorkloadEndpoint="localhost-k8s-csi--node--driver--7r7jt-eth0" Jan 23 01:11:57.060886 containerd[1546]: 2026-01-23 01:11:56.850 [INFO][4957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Namespace="calico-system" Pod="csi-node-driver-7r7jt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7r7jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7r7jt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2553e8f-fa3b-4995-9072-1f7cce3ee2c8", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255", Pod:"csi-node-driver-7r7jt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliadd415220c6", MAC:"f6:ef:9d:8c:23:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:57.060886 containerd[1546]: 2026-01-23 01:11:56.966 [INFO][4957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" Namespace="calico-system" Pod="csi-node-driver-7r7jt" WorkloadEndpoint="localhost-k8s-csi--node--driver--7r7jt-eth0" Jan 23 01:11:57.068946 kubelet[2845]: E0123 01:11:57.068908 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:11:57.125496 containerd[1546]: time="2026-01-23T01:11:57.118206201Z" level=info msg="CreateContainer within sandbox \"ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:11:57.419204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3027396258.mount: Deactivated successfully. Jan 23 01:11:57.433980 containerd[1546]: time="2026-01-23T01:11:57.432905961Z" level=info msg="Container 761b6330e83115820269c5a204b523050af805d51da89b3f8da227ed1357dab5: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:57.445875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476876028.mount: Deactivated successfully. 
Jan 23 01:11:57.450833 containerd[1546]: time="2026-01-23T01:11:57.450214651Z" level=info msg="connecting to shim fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255" address="unix:///run/containerd/s/f76a8c7dc13eb67f5b6d0031a5690193755aa4507052b892eb1e33efc7c22cbc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:57.461546 containerd[1546]: time="2026-01-23T01:11:57.458786045Z" level=info msg="CreateContainer within sandbox \"ca17bcd4913053b4c4515df55c556b30b93aa27b70c969557bf0397d7e02af3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"761b6330e83115820269c5a204b523050af805d51da89b3f8da227ed1357dab5\"" Jan 23 01:11:57.463880 containerd[1546]: time="2026-01-23T01:11:57.462855260Z" level=info msg="StartContainer for \"761b6330e83115820269c5a204b523050af805d51da89b3f8da227ed1357dab5\"" Jan 23 01:11:57.513967 containerd[1546]: time="2026-01-23T01:11:57.513038253Z" level=info msg="connecting to shim 761b6330e83115820269c5a204b523050af805d51da89b3f8da227ed1357dab5" address="unix:///run/containerd/s/1c87c1ca71e2a3ad0a09d9e7ca49df0fb7dda9019a32fa9fb167b198387265ed" protocol=ttrpc version=3 Jan 23 01:11:57.573454 containerd[1546]: time="2026-01-23T01:11:57.573065104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bmvtb,Uid:662d18b3-33cf-4000-b003-c8e7f6b2e810,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:57.616808 systemd[1]: Started cri-containerd-761b6330e83115820269c5a204b523050af805d51da89b3f8da227ed1357dab5.scope - libcontainer container 761b6330e83115820269c5a204b523050af805d51da89b3f8da227ed1357dab5. Jan 23 01:11:57.665138 systemd[1]: Started cri-containerd-fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255.scope - libcontainer container fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255. Jan 23 01:11:57.818542 systemd-networkd[1450]: calid63f23f0708: Link UP Jan 23 01:11:57.828556 systemd-networkd[1450]: calid63f23f0708: Gained carrier Jan 23 01:11:58.145047 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:11:58.209719 systemd-networkd[1450]: caliadd415220c6: Gained IPv6LL Jan 23 01:11:58.228814 systemd[1]: Started sshd@13-10.0.0.42:22-10.0.0.1:60150.service - OpenSSH per-connection server daemon (10.0.0.1:60150). 
Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:56.495 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--q7bt2-eth0 coredns-66bc5c9577- kube-system 853a2368-0964-46f6-bbf0-478966b86444 1042 0 2026-01-23 01:09:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-q7bt2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid63f23f0708 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Namespace="kube-system" Pod="coredns-66bc5c9577-q7bt2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q7bt2-" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:56.502 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Namespace="kube-system" Pod="coredns-66bc5c9577-q7bt2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.112 [INFO][5081] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" HandleID="k8s-pod-network.e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Workload="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.112 [INFO][5081] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" HandleID="k8s-pod-network.e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Workload="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a92f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-q7bt2", "timestamp":"2026-01-23 01:11:57.112512121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.112 [INFO][5081] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.113 [INFO][5081] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.113 [INFO][5081] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.298 [INFO][5081] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" host="localhost" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.369 [INFO][5081] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.454 [INFO][5081] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.480 [INFO][5081] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.560 [INFO][5081] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.560 [INFO][5081] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" host="localhost" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.571 [INFO][5081] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5 Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.646 [INFO][5081] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" host="localhost" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.725 [INFO][5081] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" host="localhost" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.725 [INFO][5081] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" host="localhost" Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.726 [INFO][5081] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:58.285091 containerd[1546]: 2026-01-23 01:11:57.726 [INFO][5081] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" HandleID="k8s-pod-network.e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Workload="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" Jan 23 01:11:58.291206 containerd[1546]: 2026-01-23 01:11:57.801 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Namespace="kube-system" Pod="coredns-66bc5c9577-q7bt2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--q7bt2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"853a2368-0964-46f6-bbf0-478966b86444", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-q7bt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid63f23f0708", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:58.291206 containerd[1546]: 2026-01-23 01:11:57.802 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Namespace="kube-system" Pod="coredns-66bc5c9577-q7bt2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" Jan 23 01:11:58.291206 containerd[1546]: 2026-01-23 01:11:57.802 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid63f23f0708 ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Namespace="kube-system" Pod="coredns-66bc5c9577-q7bt2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" Jan 23 01:11:58.291206 containerd[1546]: 2026-01-23 01:11:57.835 
[INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Namespace="kube-system" Pod="coredns-66bc5c9577-q7bt2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" Jan 23 01:11:58.291206 containerd[1546]: 2026-01-23 01:11:57.855 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Namespace="kube-system" Pod="coredns-66bc5c9577-q7bt2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--q7bt2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"853a2368-0964-46f6-bbf0-478966b86444", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5", Pod:"coredns-66bc5c9577-q7bt2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid63f23f0708", MAC:"b6:4d:f4:bb:28:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:58.291206 containerd[1546]: 2026-01-23 01:11:58.117 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" Namespace="kube-system" Pod="coredns-66bc5c9577-q7bt2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q7bt2-eth0" Jan 23 01:11:58.632432 containerd[1546]: time="2026-01-23T01:11:58.626174913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86ccb5f87d-dkzhd,Uid:369aa670-4b29-4a0c-8fff-6ae07d46c778,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:58.715421 containerd[1546]: time="2026-01-23T01:11:58.697774508Z" level=info msg="connecting to shim 
e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5" address="unix:///run/containerd/s/6718a08860c2bff50d1c26e38df9cf4619f681b8f3343b0f63c02f1e74e00c0a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:58.845899 systemd-networkd[1450]: calic5f82c4e274: Link UP Jan 23 01:11:58.860823 systemd-networkd[1450]: calic5f82c4e274: Gained carrier Jan 23 01:11:58.910070 containerd[1546]: time="2026-01-23T01:11:58.906020226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7r7jt,Uid:e2553e8f-fa3b-4995-9072-1f7cce3ee2c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbfc3bb040876c04fa89f527ef34e13e15b08a982018b030e5395b0b2ba09255\"" Jan 23 01:11:58.918792 containerd[1546]: time="2026-01-23T01:11:58.913578228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:11:59.323480 sshd[5202]: Accepted publickey for core from 10.0.0.1 port 60150 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:11:59.343956 containerd[1546]: time="2026-01-23T01:11:59.343213790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:59.386182 containerd[1546]: time="2026-01-23T01:11:59.369806699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:11:59.386182 containerd[1546]: time="2026-01-23T01:11:59.370179204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:11:59.386585 kubelet[2845]: E0123 01:11:59.374067 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:59.386585 kubelet[2845]: E0123 01:11:59.374121 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:59.386585 kubelet[2845]: E0123 01:11:59.374209 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:59.400114 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:59.448484 containerd[1546]: time="2026-01-23T01:11:59.446848696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:56.560 [INFO][5001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0 calico-apiserver-68948fdbd6- calico-apiserver 
3b1f5033-cf51-4add-93c1-34dedb396092 1045 0 2026-01-23 01:10:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68948fdbd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68948fdbd6-4jxhx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic5f82c4e274 [] [] }} ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-4jxhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:56.576 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-4jxhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:57.070 [INFO][5084] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" HandleID="k8s-pod-network.5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Workload="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:57.163 [INFO][5084] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" HandleID="k8s-pod-network.5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Workload="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00022a780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68948fdbd6-4jxhx", "timestamp":"2026-01-23 01:11:57.070211938 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:57.163 [INFO][5084] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:57.727 [INFO][5084] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:57.729 [INFO][5084] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:57.823 [INFO][5084] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" host="localhost" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.120 [INFO][5084] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.319 [INFO][5084] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.359 [INFO][5084] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.417 [INFO][5084] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.417 [INFO][5084] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" host="localhost" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.432 [INFO][5084] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4 Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.494 [INFO][5084] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" host="localhost" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.727 [INFO][5084] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" host="localhost" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.728 [INFO][5084] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" host="localhost" Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.734 [INFO][5084] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:59.497095 containerd[1546]: 2026-01-23 01:11:58.758 [INFO][5084] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" HandleID="k8s-pod-network.5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Workload="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" Jan 23 01:11:59.501943 containerd[1546]: 2026-01-23 01:11:58.814 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-4jxhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0", GenerateName:"calico-apiserver-68948fdbd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b1f5033-cf51-4add-93c1-34dedb396092", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68948fdbd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68948fdbd6-4jxhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5f82c4e274", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:59.501943 containerd[1546]: 2026-01-23 01:11:58.815 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-4jxhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" Jan 23 01:11:59.501943 containerd[1546]: 2026-01-23 01:11:58.815 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5f82c4e274 ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-4jxhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" Jan 23 01:11:59.501943 containerd[1546]: 2026-01-23 01:11:58.875 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-4jxhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" Jan 23 01:11:59.501943 containerd[1546]: 2026-01-23 01:11:58.915 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-4jxhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0", GenerateName:"calico-apiserver-68948fdbd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b1f5033-cf51-4add-93c1-34dedb396092", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68948fdbd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4", Pod:"calico-apiserver-68948fdbd6-4jxhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5f82c4e274", MAC:"d2:8e:d6:ce:91:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:59.501943 containerd[1546]: 2026-01-23 01:11:59.394 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-4jxhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--4jxhx-eth0" Jan 23 01:11:59.500755 systemd-logind[1529]: New session 14 of user core. Jan 23 01:11:59.521715 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 01:11:59.580174 containerd[1546]: time="2026-01-23T01:11:59.578953926Z" level=info msg="StartContainer for \"761b6330e83115820269c5a204b523050af805d51da89b3f8da227ed1357dab5\" returns successfully" Jan 23 01:11:59.617486 systemd-networkd[1450]: calid63f23f0708: Gained IPv6LL Jan 23 01:11:59.763555 containerd[1546]: time="2026-01-23T01:11:59.762007033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:59.805216 systemd[1]: Started cri-containerd-e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5.scope - libcontainer container e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5. 
Jan 23 01:11:59.806956 containerd[1546]: time="2026-01-23T01:11:59.806476494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:11:59.807217 containerd[1546]: time="2026-01-23T01:11:59.807147387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:11:59.815132 kubelet[2845]: E0123 01:11:59.814146 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:59.815132 kubelet[2845]: E0123 01:11:59.814883 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:59.815132 kubelet[2845]: E0123 01:11:59.814988 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:59.815792 kubelet[2845]: E0123 01:11:59.815042 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:11:59.942756 containerd[1546]: time="2026-01-23T01:11:59.941467307Z" level=info msg="connecting to shim 5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4" address="unix:///run/containerd/s/de395e38dedbfe0a620b3756fb5110b1076bfec1135ea4c78e88ca070ff3778c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:59.946581 kubelet[2845]: E0123 01:11:59.940217 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:12:00.127536 kubelet[2845]: E0123 01:12:00.127504 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:00.258018 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:12:00.412120 kubelet[2845]: I0123 01:12:00.391897 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nbdj7" podStartSLOduration=137.391873387 podStartE2EDuration="2m17.391873387s" podCreationTimestamp="2026-01-23 01:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:12:00.37349783 +0000 UTC m=+137.853582485" watchObservedRunningTime="2026-01-23 01:12:00.391873387 +0000 UTC m=+137.871958011" Jan 23 01:12:00.510199 systemd-networkd[1450]: calic5f82c4e274: Gained IPv6LL Jan 23 01:12:00.724549 systemd-networkd[1450]: calid9521f62a18: Link UP Jan 23 01:12:00.731032 systemd-networkd[1450]: calid9521f62a18: Gained carrier Jan 23 01:12:00.790583 sshd[5285]: Connection closed by 10.0.0.1 port 60150 Jan 23 01:12:00.795470 sshd-session[5202]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:00.823422 systemd[1]: sshd@13-10.0.0.42:22-10.0.0.1:60150.service: Deactivated successfully. Jan 23 01:12:00.839438 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:12:00.863067 systemd-logind[1529]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:12:00.919755 systemd[1]: Started cri-containerd-5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4.scope - libcontainer container 5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4. Jan 23 01:12:00.951721 systemd-logind[1529]: Removed session 14. 
Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:56.823 [INFO][5038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0 calico-apiserver-68948fdbd6- calico-apiserver 5865bb49-f0fe-4eb4-8f6c-74bc939474ad 1046 0 2026-01-23 01:10:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68948fdbd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68948fdbd6-mdlvc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid9521f62a18 [] [] }} ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-mdlvc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:56.846 [INFO][5038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-mdlvc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:57.476 [INFO][5104] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" HandleID="k8s-pod-network.3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Workload="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:57.477 [INFO][5104] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" HandleID="k8s-pod-network.3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Workload="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000205db0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68948fdbd6-mdlvc", "timestamp":"2026-01-23 01:11:57.476908275 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:57.477 [INFO][5104] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:58.737 [INFO][5104] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:58.738 [INFO][5104] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:58.861 [INFO][5104] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" host="localhost" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:59.335 [INFO][5104] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:59.575 [INFO][5104] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:59.638 [INFO][5104] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:59.838 [INFO][5104] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:59.838 [INFO][5104] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" host="localhost" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:11:59.932 [INFO][5104] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839 Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:12:00.390 [INFO][5104] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" host="localhost" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:12:00.492 [INFO][5104] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" host="localhost" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:12:00.502 [INFO][5104] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" host="localhost" Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:12:00.502 [INFO][5104] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:12:00.985988 containerd[1546]: 2026-01-23 01:12:00.512 [INFO][5104] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" HandleID="k8s-pod-network.3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Workload="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" Jan 23 01:12:00.991779 containerd[1546]: 2026-01-23 01:12:00.665 [INFO][5038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-mdlvc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0", GenerateName:"calico-apiserver-68948fdbd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5865bb49-f0fe-4eb4-8f6c-74bc939474ad", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68948fdbd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68948fdbd6-mdlvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9521f62a18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:12:00.991779 containerd[1546]: 2026-01-23 01:12:00.674 [INFO][5038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-mdlvc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" Jan 23 01:12:00.991779 containerd[1546]: 2026-01-23 01:12:00.674 [INFO][5038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9521f62a18 ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-mdlvc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" Jan 23 01:12:00.991779 containerd[1546]: 2026-01-23 01:12:00.732 [INFO][5038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-mdlvc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" Jan 23 01:12:00.991779 containerd[1546]: 2026-01-23 01:12:00.734 [INFO][5038] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-mdlvc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0", GenerateName:"calico-apiserver-68948fdbd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5865bb49-f0fe-4eb4-8f6c-74bc939474ad", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68948fdbd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839", Pod:"calico-apiserver-68948fdbd6-mdlvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9521f62a18", MAC:"2e:6d:58:ac:42:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:12:00.991779 containerd[1546]: 2026-01-23 01:12:00.842 [INFO][5038] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" Namespace="calico-apiserver" Pod="calico-apiserver-68948fdbd6-mdlvc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68948fdbd6--mdlvc-eth0" Jan 23 01:12:01.188745 kubelet[2845]: E0123 01:12:01.182473 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:01.201892 kubelet[2845]: E0123 01:12:01.201826 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:12:01.549051 systemd-resolved[1454]: Failed to determine the local hostname and 
LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:12:01.555113 containerd[1546]: time="2026-01-23T01:12:01.555064667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q7bt2,Uid:853a2368-0964-46f6-bbf0-478966b86444,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5\"" Jan 23 01:12:01.571690 kubelet[2845]: E0123 01:12:01.569852 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:01.616481 systemd-networkd[1450]: cali38e5e8b832b: Link UP Jan 23 01:12:01.642904 systemd-networkd[1450]: cali38e5e8b832b: Gained carrier Jan 23 01:12:01.668199 containerd[1546]: time="2026-01-23T01:12:01.668070050Z" level=info msg="CreateContainer within sandbox \"e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:12:01.824491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006285270.mount: Deactivated successfully. Jan 23 01:12:01.834834 containerd[1546]: time="2026-01-23T01:12:01.833713428Z" level=info msg="Container 2b69a602d0b5041058671d46a7bbea2ec8e89f9a600a44d4b110a2f18911ded6: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:11:58.629 [INFO][5161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--bmvtb-eth0 goldmane-7c778bb748- calico-system 662d18b3-33cf-4000-b003-c8e7f6b2e810 1047 0 2026-01-23 01:10:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-bmvtb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali38e5e8b832b [] [] }} ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Namespace="calico-system" Pod="goldmane-7c778bb748-bmvtb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bmvtb-" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:11:58.697 [INFO][5161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Namespace="calico-system" Pod="goldmane-7c778bb748-bmvtb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:00.601 [INFO][5254] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" HandleID="k8s-pod-network.794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Workload="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:00.603 [INFO][5254] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" HandleID="k8s-pod-network.794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Workload="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319a30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-bmvtb", "timestamp":"2026-01-23 01:12:00.601087939 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:00.603 [INFO][5254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:00.603 [INFO][5254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:00.603 [INFO][5254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:00.707 [INFO][5254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" host="localhost" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:00.916 [INFO][5254] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.007 [INFO][5254] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.170 [INFO][5254] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.333 [INFO][5254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.335 [INFO][5254] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" host="localhost" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.430 [INFO][5254] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.496 [INFO][5254] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" host="localhost" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.564 [INFO][5254] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" host="localhost" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.565 [INFO][5254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" host="localhost" Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.565 [INFO][5254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:12:01.863410 containerd[1546]: 2026-01-23 01:12:01.565 [INFO][5254] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" HandleID="k8s-pod-network.794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Workload="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" Jan 23 01:12:01.871120 containerd[1546]: 2026-01-23 01:12:01.605 [INFO][5161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Namespace="calico-system" Pod="goldmane-7c778bb748-bmvtb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--bmvtb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"662d18b3-33cf-4000-b003-c8e7f6b2e810", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-bmvtb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali38e5e8b832b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:12:01.871120 containerd[1546]: 2026-01-23 01:12:01.606 [INFO][5161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Namespace="calico-system" Pod="goldmane-7c778bb748-bmvtb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" Jan 23 01:12:01.871120 containerd[1546]: 2026-01-23 01:12:01.606 [INFO][5161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38e5e8b832b ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Namespace="calico-system" Pod="goldmane-7c778bb748-bmvtb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" Jan 23 01:12:01.871120 containerd[1546]: 2026-01-23 01:12:01.640 [INFO][5161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Namespace="calico-system" Pod="goldmane-7c778bb748-bmvtb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" Jan 23 01:12:01.871120 containerd[1546]: 2026-01-23 01:12:01.654 [INFO][5161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Namespace="calico-system" Pod="goldmane-7c778bb748-bmvtb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--bmvtb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"662d18b3-33cf-4000-b003-c8e7f6b2e810", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a", Pod:"goldmane-7c778bb748-bmvtb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali38e5e8b832b", MAC:"72:94:e5:ae:69:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:12:01.871120 containerd[1546]: 2026-01-23 01:12:01.830 [INFO][5161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" Namespace="calico-system" Pod="goldmane-7c778bb748-bmvtb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bmvtb-eth0" Jan 23 01:12:01.917066 containerd[1546]: time="2026-01-23T01:12:01.916770604Z" level=info msg="connecting to shim 3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839" address="unix:///run/containerd/s/c9e1cd896e945237a04b07a810d42604614db2e2b923d9880b69e3c5c88f39c2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:12:01.940080 containerd[1546]: time="2026-01-23T01:12:01.939019773Z" level=info msg="CreateContainer within sandbox \"e8facd46cfdde5379ec8e74544b3bb7acf9f33a855aa1fbd1484f0d94931f9c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b69a602d0b5041058671d46a7bbea2ec8e89f9a600a44d4b110a2f18911ded6\"" Jan 23 01:12:01.946500 containerd[1546]: time="2026-01-23T01:12:01.945442634Z" level=info msg="StartContainer for \"2b69a602d0b5041058671d46a7bbea2ec8e89f9a600a44d4b110a2f18911ded6\"" Jan 23 01:12:01.949437 containerd[1546]: time="2026-01-23T01:12:01.947181701Z" level=info msg="connecting to shim 2b69a602d0b5041058671d46a7bbea2ec8e89f9a600a44d4b110a2f18911ded6" address="unix:///run/containerd/s/6718a08860c2bff50d1c26e38df9cf4619f681b8f3343b0f63c02f1e74e00c0a" protocol=ttrpc version=3 Jan 23 01:12:02.303529 kubelet[2845]: E0123 01:12:02.303045 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:02.327748 containerd[1546]: time="2026-01-23T01:12:02.325068824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-4jxhx,Uid:3b1f5033-cf51-4add-93c1-34dedb396092,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5df48a6cd2393b5f243287243d1b29cab0aa455a02e3fff152c27ccd8cc393b4\"" Jan 23 
01:12:02.336848 containerd[1546]: time="2026-01-23T01:12:02.334518316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:12:02.372094 systemd[1]: Started cri-containerd-3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839.scope - libcontainer container 3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839. Jan 23 01:12:02.403453 containerd[1546]: time="2026-01-23T01:12:02.403153687Z" level=info msg="connecting to shim 794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a" address="unix:///run/containerd/s/14f4c8f863023f0006cda4ec6b30cb713ffa0f8e5d44a9d44dcd009681942449" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:12:02.461506 containerd[1546]: time="2026-01-23T01:12:02.461448500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:02.488818 containerd[1546]: time="2026-01-23T01:12:02.488744720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:12:02.494114 containerd[1546]: time="2026-01-23T01:12:02.489416044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:02.496827 kubelet[2845]: E0123 01:12:02.496185 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:02.496827 kubelet[2845]: E0123 01:12:02.496493 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:02.526452 kubelet[2845]: E0123 01:12:02.525487 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:02.526452 kubelet[2845]: E0123 01:12:02.525560 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:12:02.591960 systemd[1]: Started cri-containerd-2b69a602d0b5041058671d46a7bbea2ec8e89f9a600a44d4b110a2f18911ded6.scope - libcontainer container 2b69a602d0b5041058671d46a7bbea2ec8e89f9a600a44d4b110a2f18911ded6. 
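The 404s above mean the tag ghcr.io/flatcar/calico/apiserver:v3.30.4 does not resolve at the registry, after which kubelet degrades to ImagePullBackOff. The resolution step containerd reports as "fetch failed after status: 404" can be checked independently of kubelet, e.g. with `crictl pull ghcr.io/flatcar/calico/apiserver:v3.30.4`, or by probing the manifest endpoint directly. A sketch following the OCI distribution spec URL layout (anonymous requests to ghcr.io normally get 401 until a bearer token is obtained, so treat this as illustrative):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// OCI distribution layout: /v2/<name>/manifests/<reference>
    	url := "https://ghcr.io/v2/flatcar/calico/apiserver/manifests/v3.30.4"
    	req, err := http.NewRequest(http.MethodHead, url, nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status) // a 404 here would match the pull failures in the log
    }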
Jan 23 01:12:02.693999 systemd-networkd[1450]: calid9521f62a18: Gained IPv6LL Jan 23 01:12:02.788516 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:12:02.807913 systemd-networkd[1450]: cali81f254fb006: Link UP Jan 23 01:12:02.814145 systemd-networkd[1450]: cali81f254fb006: Gained carrier Jan 23 01:12:02.956887 systemd[1]: Started cri-containerd-794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a.scope - libcontainer container 794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a. Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:00.661 [INFO][5239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0 calico-kube-controllers-86ccb5f87d- calico-system 369aa670-4b29-4a0c-8fff-6ae07d46c778 1041 0 2026-01-23 01:10:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86ccb5f87d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86ccb5f87d-dkzhd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali81f254fb006 [] [] }} ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Namespace="calico-system" Pod="calico-kube-controllers-86ccb5f87d-dkzhd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:00.671 [INFO][5239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Namespace="calico-system" Pod="calico-kube-controllers-86ccb5f87d-dkzhd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:01.766 [INFO][5351] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" HandleID="k8s-pod-network.a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Workload="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:01.770 [INFO][5351] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" HandleID="k8s-pod-network.a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Workload="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005202f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86ccb5f87d-dkzhd", "timestamp":"2026-01-23 01:12:01.76652098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:01.828 [INFO][5351] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:01.831 [INFO][5351] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
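calid9521f62a18 gaining an IPv6 link-local address confirms the host side of the pod's veth pair is up; Calico names these host-side interfaces with a cali prefix, and the same name appears as InterfaceName in the WorkloadEndpoint entries above. On the node it can be inspected with `ip addr show dev calid9521f62a18`, or programmatically (a sketch that must run on the node itself):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Interface name taken from the systemd-networkd entry above.
    	ifi, err := net.InterfaceByName("calid9521f62a18")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	addrs, _ := ifi.Addrs()
    	fmt.Println(ifi.Name, ifi.HardwareAddr, addrs)
    }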
Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:01.831 [INFO][5351] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:01.898 [INFO][5351] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" host="localhost" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.219 [INFO][5351] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.256 [INFO][5351] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.296 [INFO][5351] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.363 [INFO][5351] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.365 [INFO][5351] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" host="localhost" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.408 [INFO][5351] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.436 [INFO][5351] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" host="localhost" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.493 [INFO][5351] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" host="localhost" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.494 [INFO][5351] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" host="localhost" Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.519 [INFO][5351] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
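Each allocation above is recorded under a handle of the form k8s-pod-network.<containerID>, so a later CNI DEL can release exactly the addresses that this ADD claimed. A toy illustration of that bookkeeping (illustrative only; the container ID below is shortened from the full ID in the log):

    package main

    import "fmt"

    func main() {
    	handles := map[string][]string{}
    	h := "k8s-pod-network.a60ac8c6a2b4" // shortened from the full ID in the log
    	handles[h] = append(handles[h], "192.168.88.136")
    	// On CNI DEL, everything recorded under the handle is released at once.
    	fmt.Println("releasing", handles[h])
    	delete(handles, h)
    }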
Jan 23 01:12:02.969767 containerd[1546]: 2026-01-23 01:12:02.520 [INFO][5351] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" HandleID="k8s-pod-network.a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Workload="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" Jan 23 01:12:02.982944 containerd[1546]: 2026-01-23 01:12:02.571 [INFO][5239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Namespace="calico-system" Pod="calico-kube-controllers-86ccb5f87d-dkzhd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0", GenerateName:"calico-kube-controllers-86ccb5f87d-", Namespace:"calico-system", SelfLink:"", UID:"369aa670-4b29-4a0c-8fff-6ae07d46c778", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86ccb5f87d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86ccb5f87d-dkzhd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81f254fb006", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:12:02.982944 containerd[1546]: 2026-01-23 01:12:02.577 [INFO][5239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Namespace="calico-system" Pod="calico-kube-controllers-86ccb5f87d-dkzhd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" Jan 23 01:12:02.982944 containerd[1546]: 2026-01-23 01:12:02.577 [INFO][5239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81f254fb006 ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Namespace="calico-system" Pod="calico-kube-controllers-86ccb5f87d-dkzhd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" Jan 23 01:12:02.982944 containerd[1546]: 2026-01-23 01:12:02.843 [INFO][5239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Namespace="calico-system" Pod="calico-kube-controllers-86ccb5f87d-dkzhd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" Jan 23 01:12:02.982944 containerd[1546]: 2026-01-23 01:12:02.872 [INFO][5239] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Namespace="calico-system" Pod="calico-kube-controllers-86ccb5f87d-dkzhd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0", GenerateName:"calico-kube-controllers-86ccb5f87d-", Namespace:"calico-system", SelfLink:"", UID:"369aa670-4b29-4a0c-8fff-6ae07d46c778", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86ccb5f87d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac", Pod:"calico-kube-controllers-86ccb5f87d-dkzhd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81f254fb006", MAC:"ce:6a:a6:48:82:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:12:02.982944 containerd[1546]: 2026-01-23 01:12:02.954 [INFO][5239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" Namespace="calico-system" Pod="calico-kube-controllers-86ccb5f87d-dkzhd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86ccb5f87d--dkzhd-eth0" Jan 23 01:12:03.130442 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:12:03.134955 containerd[1546]: time="2026-01-23T01:12:03.134909806Z" level=info msg="StartContainer for \"2b69a602d0b5041058671d46a7bbea2ec8e89f9a600a44d4b110a2f18911ded6\" returns successfully" Jan 23 01:12:03.280529 containerd[1546]: time="2026-01-23T01:12:03.280465307Z" level=info msg="connecting to shim a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac" address="unix:///run/containerd/s/44a1c38939102731613e74e2af83753913a6d532216b632d108289dc0c85ded0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:12:03.325488 systemd-networkd[1450]: cali38e5e8b832b: Gained IPv6LL Jan 23 01:12:03.394802 containerd[1546]: time="2026-01-23T01:12:03.394186319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68948fdbd6-mdlvc,Uid:5865bb49-f0fe-4eb4-8f6c-74bc939474ad,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3efdc332455e8cfd8239fb326c7aec307d02e722252af7ca244f242da2474839\"" Jan 23 01:12:03.402119 kubelet[2845]: E0123 01:12:03.402029 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:03.506856 kubelet[2845]: E0123 01:12:03.503210 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:03.515206 containerd[1546]: time="2026-01-23T01:12:03.515026157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:12:03.518478 kubelet[2845]: E0123 01:12:03.517474 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:12:03.843717 containerd[1546]: time="2026-01-23T01:12:03.764496797Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:03.843717 containerd[1546]: time="2026-01-23T01:12:03.787851189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:12:03.843717 containerd[1546]: time="2026-01-23T01:12:03.787980771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:03.843717 containerd[1546]: time="2026-01-23T01:12:03.790045246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:12:03.845453 kubelet[2845]: E0123 01:12:03.788570 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:03.845453 kubelet[2845]: E0123 01:12:03.788727 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:03.845453 kubelet[2845]: E0123 01:12:03.789112 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:03.845453 kubelet[2845]: E0123 01:12:03.789156 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:12:03.847529 kubelet[2845]: I0123 01:12:03.812939 2845 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-q7bt2" podStartSLOduration=140.812916596 podStartE2EDuration="2m20.812916596s" podCreationTimestamp="2026-01-23 01:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:12:03.56183563 +0000 UTC m=+141.041920255" watchObservedRunningTime="2026-01-23 01:12:03.812916596 +0000 UTC m=+141.293001241" Jan 23 01:12:03.853018 systemd[1]: Started cri-containerd-a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac.scope - libcontainer container a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac. Jan 23 01:12:03.902568 systemd-networkd[1450]: cali81f254fb006: Gained IPv6LL Jan 23 01:12:03.957967 containerd[1546]: time="2026-01-23T01:12:03.957187897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:03.968917 containerd[1546]: time="2026-01-23T01:12:03.968174949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:12:03.970077 containerd[1546]: time="2026-01-23T01:12:03.970028620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:12:03.977863 kubelet[2845]: E0123 01:12:03.976951 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:12:03.977863 kubelet[2845]: E0123 01:12:03.977163 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:12:03.977863 kubelet[2845]: E0123 01:12:03.977533 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8554d56494-ldgsm_calico-system(8cf6c8e5-97b0-4acf-833b-96387e1e4a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:03.999850 containerd[1546]: time="2026-01-23T01:12:03.995003597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:12:04.032088 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:12:04.049075 containerd[1546]: 
time="2026-01-23T01:12:04.048486241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bmvtb,Uid:662d18b3-33cf-4000-b003-c8e7f6b2e810,Namespace:calico-system,Attempt:0,} returns sandbox id \"794aad64994ca0fa939f96fce1e66b50d0ffd1c58c0713acb195d7830196657a\"" Jan 23 01:12:04.160572 containerd[1546]: time="2026-01-23T01:12:04.153905964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:04.167146 containerd[1546]: time="2026-01-23T01:12:04.166801468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:12:04.167146 containerd[1546]: time="2026-01-23T01:12:04.167038350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:12:04.171784 kubelet[2845]: E0123 01:12:04.170078 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:12:04.171784 kubelet[2845]: E0123 01:12:04.170137 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:12:04.171784 kubelet[2845]: E0123 01:12:04.170544 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8554d56494-ldgsm_calico-system(8cf6c8e5-97b0-4acf-833b-96387e1e4a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:04.171937 containerd[1546]: time="2026-01-23T01:12:04.171100433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:12:04.171986 kubelet[2845]: E0123 01:12:04.170727 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:12:04.289901 containerd[1546]: time="2026-01-23T01:12:04.289737713Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86ccb5f87d-dkzhd,Uid:369aa670-4b29-4a0c-8fff-6ae07d46c778,Namespace:calico-system,Attempt:0,} returns sandbox id \"a60ac8c6a2b486d258cb54c084506c1965e5c00af3c64647d7b45c72d2df9aac\"" Jan 23 01:12:04.312440 containerd[1546]: time="2026-01-23T01:12:04.312053224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:04.326893 containerd[1546]: time="2026-01-23T01:12:04.326567047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:12:04.326893 containerd[1546]: time="2026-01-23T01:12:04.326827673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:04.328594 kubelet[2845]: E0123 01:12:04.327977 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:12:04.328594 kubelet[2845]: E0123 01:12:04.328045 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:12:04.328950 containerd[1546]: time="2026-01-23T01:12:04.328819141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:12:04.331127 kubelet[2845]: E0123 01:12:04.330821 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:04.331127 kubelet[2845]: E0123 01:12:04.330965 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:12:04.421055 containerd[1546]: time="2026-01-23T01:12:04.420872946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:04.427957 containerd[1546]: time="2026-01-23T01:12:04.427464474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:12:04.427957 containerd[1546]: time="2026-01-23T01:12:04.427797346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:12:04.430565 kubelet[2845]: E0123 01:12:04.429943 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:12:04.430565 kubelet[2845]: E0123 01:12:04.430116 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:12:04.430565 kubelet[2845]: E0123 01:12:04.430213 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:04.430565 kubelet[2845]: E0123 01:12:04.430467 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:12:04.527469 kubelet[2845]: E0123 01:12:04.526042 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:12:04.530516 kubelet[2845]: E0123 01:12:04.529200 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" 
podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:12:04.534082 kubelet[2845]: E0123 01:12:04.533457 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:04.534082 kubelet[2845]: E0123 01:12:04.533795 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:12:04.534082 kubelet[2845]: E0123 01:12:04.533894 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:12:05.550987 kubelet[2845]: E0123 01:12:05.543754 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:05.559454 kubelet[2845]: E0123 01:12:05.559204 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:05.574217 kubelet[2845]: E0123 01:12:05.574166 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:12:05.575106 kubelet[2845]: E0123 01:12:05.575076 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:12:05.586437 kubelet[2845]: E0123 01:12:05.586020 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:12:05.875780 systemd[1]: Started sshd@14-10.0.0.42:22-10.0.0.1:56336.service - OpenSSH per-connection server daemon (10.0.0.1:56336). Jan 23 01:12:06.114161 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 56336 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:06.121053 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:06.183867 systemd-logind[1529]: New session 15 of user core. Jan 23 01:12:06.204067 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:12:07.017988 sshd[5607]: Connection closed by 10.0.0.1 port 56336 Jan 23 01:12:07.020070 sshd-session[5604]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:07.034449 systemd[1]: sshd@14-10.0.0.42:22-10.0.0.1:56336.service: Deactivated successfully. Jan 23 01:12:07.048151 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:12:07.052429 systemd-logind[1529]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:12:07.066050 systemd-logind[1529]: Removed session 15. Jan 23 01:12:08.544149 kubelet[2845]: E0123 01:12:08.544050 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:12.096903 systemd[1]: Started sshd@15-10.0.0.42:22-10.0.0.1:56344.service - OpenSSH per-connection server daemon (10.0.0.1:56344). Jan 23 01:12:12.394151 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 56344 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:12.402181 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:12.448899 systemd-logind[1529]: New session 16 of user core. Jan 23 01:12:12.470218 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 01:12:12.625087 containerd[1546]: time="2026-01-23T01:12:12.615598689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:12:12.772565 containerd[1546]: time="2026-01-23T01:12:12.770789027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:12.809169 containerd[1546]: time="2026-01-23T01:12:12.807963330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:12:12.809169 containerd[1546]: time="2026-01-23T01:12:12.808098823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:12:12.811467 kubelet[2845]: E0123 01:12:12.809920 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:12.811467 kubelet[2845]: E0123 01:12:12.809990 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:12.811467 kubelet[2845]: E0123 01:12:12.810086 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:12.848486 containerd[1546]: time="2026-01-23T01:12:12.848076705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:12:13.072071 containerd[1546]: time="2026-01-23T01:12:13.070425631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:13.095134 containerd[1546]: time="2026-01-23T01:12:13.094442605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:12:13.096439 containerd[1546]: time="2026-01-23T01:12:13.094970656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:12:13.102412 kubelet[2845]: E0123 01:12:13.100849 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 
01:12:13.102412 kubelet[2845]: E0123 01:12:13.101945 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:12:13.102412 kubelet[2845]: E0123 01:12:13.102107 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:13.103092 kubelet[2845]: E0123 01:12:13.102168 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:12:13.152865 sshd[5627]: Connection closed by 10.0.0.1 port 56344 Jan 23 01:12:13.158578 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:13.171785 systemd[1]: sshd@15-10.0.0.42:22-10.0.0.1:56344.service: Deactivated successfully. Jan 23 01:12:13.198542 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:12:13.212941 systemd-logind[1529]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:12:13.219547 systemd-logind[1529]: Removed session 16. 
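The alternation in these records between ErrImagePull (an actual pull attempt that failed) and ImagePullBackOff (kubelet declining to retry yet) is kubelet's image-pull backoff at work: retries are spaced exponentially, starting around 10s and doubling up to a 5-minute cap. Those numbers are kubelet defaults in recent releases and are assumptions here, not something this log states. A sketch of the resulting schedule:

    # Sketch of kubelet's image-pull retry spacing: exponential backoff with a
    # cap. The 10s base, 2x factor and 300s ceiling mirror kubelet defaults as
    # of recent releases; treat them as assumptions.
    def backoff_schedule(base=10.0, factor=2.0, cap=300.0, attempts=8):
        delay = base
        for attempt in range(1, attempts + 1):
            yield attempt, min(delay, cap)
            delay *= factor

    for attempt, delay in backoff_schedule():
        print(f"attempt {attempt}: wait {delay:.0f}s before the next pull")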
Jan 23 01:12:15.298926 kubelet[2845]: E0123 01:12:15.297787 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:15.543170 kubelet[2845]: E0123 01:12:15.540991 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:15.550486 containerd[1546]: time="2026-01-23T01:12:15.549947718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:12:15.680873 containerd[1546]: time="2026-01-23T01:12:15.673051449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:15.698898 containerd[1546]: time="2026-01-23T01:12:15.695600497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:12:15.698898 containerd[1546]: time="2026-01-23T01:12:15.696019348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:15.711131 kubelet[2845]: E0123 01:12:15.709729 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:15.711131 kubelet[2845]: E0123 01:12:15.709866 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:15.711131 kubelet[2845]: E0123 01:12:15.709974 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:15.711131 kubelet[2845]: E0123 01:12:15.710021 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:12:18.206621 systemd[1]: Started sshd@16-10.0.0.42:22-10.0.0.1:38128.service - OpenSSH per-connection server daemon (10.0.0.1:38128). 
Jan 23 01:12:18.420388 sshd[5675]: Accepted publickey for core from 10.0.0.1 port 38128 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:18.424151 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:18.455004 systemd-logind[1529]: New session 17 of user core. Jan 23 01:12:18.473182 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 01:12:18.545220 containerd[1546]: time="2026-01-23T01:12:18.543547269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:12:18.648162 containerd[1546]: time="2026-01-23T01:12:18.647990550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:18.652445 containerd[1546]: time="2026-01-23T01:12:18.652146098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:12:18.652445 containerd[1546]: time="2026-01-23T01:12:18.652405041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:18.652826 kubelet[2845]: E0123 01:12:18.652597 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:18.652826 kubelet[2845]: E0123 01:12:18.652820 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:18.653900 kubelet[2845]: E0123 01:12:18.652916 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:18.653900 kubelet[2845]: E0123 01:12:18.652965 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:12:18.979170 sshd[5678]: Connection closed by 10.0.0.1 port 38128 Jan 23 01:12:18.984004 sshd-session[5675]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:19.007772 systemd[1]: sshd@16-10.0.0.42:22-10.0.0.1:38128.service: Deactivated successfully. 
Jan 23 01:12:19.012872 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:12:19.021484 systemd-logind[1529]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:12:19.025784 systemd-logind[1529]: Removed session 17. Jan 23 01:12:19.545190 containerd[1546]: time="2026-01-23T01:12:19.543020523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:12:19.549447 kubelet[2845]: E0123 01:12:19.548447 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:12:19.700756 containerd[1546]: time="2026-01-23T01:12:19.683605382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:19.700756 containerd[1546]: time="2026-01-23T01:12:19.700451133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:12:19.700756 containerd[1546]: time="2026-01-23T01:12:19.700563393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:19.701589 kubelet[2845]: E0123 01:12:19.701191 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:12:19.701589 kubelet[2845]: E0123 01:12:19.701424 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:12:19.701589 kubelet[2845]: E0123 01:12:19.701517 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:19.701589 kubelet[2845]: E0123 
01:12:19.701558 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:12:20.547876 containerd[1546]: time="2026-01-23T01:12:20.547822766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:12:20.654127 containerd[1546]: time="2026-01-23T01:12:20.653750290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:20.667487 containerd[1546]: time="2026-01-23T01:12:20.665006475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:12:20.667487 containerd[1546]: time="2026-01-23T01:12:20.665211298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:12:20.671601 kubelet[2845]: E0123 01:12:20.668048 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:12:20.671601 kubelet[2845]: E0123 01:12:20.668112 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:12:20.673147 kubelet[2845]: E0123 01:12:20.672986 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:20.675790 kubelet[2845]: E0123 01:12:20.674177 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:12:22.539822 kubelet[2845]: E0123 01:12:22.539586 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:24.030624 systemd[1]: Started sshd@17-10.0.0.42:22-10.0.0.1:55186.service - OpenSSH per-connection server daemon (10.0.0.1:55186). Jan 23 01:12:24.287593 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 55186 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:24.294204 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:24.311767 systemd-logind[1529]: New session 18 of user core. Jan 23 01:12:24.331087 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:12:24.737506 sshd[5697]: Connection closed by 10.0.0.1 port 55186 Jan 23 01:12:24.736849 sshd-session[5692]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:24.747985 systemd[1]: sshd@17-10.0.0.42:22-10.0.0.1:55186.service: Deactivated successfully. Jan 23 01:12:24.755137 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:12:24.759834 systemd-logind[1529]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:12:24.768800 systemd-logind[1529]: Removed session 18. Jan 23 01:12:25.539599 kubelet[2845]: E0123 01:12:25.539040 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:12:26.549024 kubelet[2845]: E0123 01:12:26.548603 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:12:27.542425 kubelet[2845]: E0123 01:12:27.540478 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:12:29.770987 systemd[1]: Started sshd@18-10.0.0.42:22-10.0.0.1:55192.service - OpenSSH per-connection server daemon (10.0.0.1:55192). 
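The periodic "Nameserver limits exceeded" records come from kubelet's resolv.conf validation: glibc resolvers honor at most three nameserver entries (MAXNS = 3), so when the node's resolv.conf lists more, kubelet applies only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and warns that the rest were omitted. A sketch of the same check, reading /etc/resolv.conf:

    # Reproduce kubelet's nameserver-limit warning: glibc only uses the first
    # three "nameserver" entries in resolv.conf (MAXNS = 3).
    MAX_NAMESERVERS = 3

    def check_resolv_conf(path="/etc/resolv.conf"):
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        if len(servers) > MAX_NAMESERVERS:
            print("Nameserver limits exceeded; applied nameserver line is:",
                  " ".join(servers[:MAX_NAMESERVERS]))
        return servers

    check_resolv_conf()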
Jan 23 01:12:29.897755 sshd[5716]: Accepted publickey for core from 10.0.0.1 port 55192 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:29.900896 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:29.917346 systemd-logind[1529]: New session 19 of user core. Jan 23 01:12:29.927836 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:12:30.297952 sshd[5719]: Connection closed by 10.0.0.1 port 55192 Jan 23 01:12:30.300875 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:30.336461 systemd[1]: sshd@18-10.0.0.42:22-10.0.0.1:55192.service: Deactivated successfully. Jan 23 01:12:30.346060 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:12:30.348194 systemd-logind[1529]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:12:30.355914 systemd[1]: Started sshd@19-10.0.0.42:22-10.0.0.1:55196.service - OpenSSH per-connection server daemon (10.0.0.1:55196). Jan 23 01:12:30.364457 systemd-logind[1529]: Removed session 19. Jan 23 01:12:30.462138 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 55196 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:30.468632 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:30.502546 systemd-logind[1529]: New session 20 of user core. Jan 23 01:12:30.517653 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:12:30.992846 sshd[5739]: Connection closed by 10.0.0.1 port 55196 Jan 23 01:12:30.996126 sshd-session[5736]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:31.008790 systemd[1]: sshd@19-10.0.0.42:22-10.0.0.1:55196.service: Deactivated successfully. Jan 23 01:12:31.013914 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:12:31.020970 systemd-logind[1529]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:12:31.030192 systemd[1]: Started sshd@20-10.0.0.42:22-10.0.0.1:55202.service - OpenSSH per-connection server daemon (10.0.0.1:55202). Jan 23 01:12:31.035543 systemd-logind[1529]: Removed session 20. Jan 23 01:12:31.195573 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 55202 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:31.200549 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:31.222817 systemd-logind[1529]: New session 21 of user core. Jan 23 01:12:31.242074 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 01:12:31.550639 kubelet[2845]: E0123 01:12:31.549194 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:12:31.553442 containerd[1546]: time="2026-01-23T01:12:31.550449338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:12:31.664581 sshd[5753]: Connection closed by 10.0.0.1 port 55202 Jan 23 01:12:31.666656 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:31.672124 containerd[1546]: time="2026-01-23T01:12:31.671078264Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:31.676943 containerd[1546]: time="2026-01-23T01:12:31.676543598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:12:31.677121 containerd[1546]: time="2026-01-23T01:12:31.677101139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:12:31.691584 kubelet[2845]: E0123 01:12:31.691531 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:12:31.691958 kubelet[2845]: E0123 01:12:31.691927 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:12:31.692129 kubelet[2845]: E0123 01:12:31.692100 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8554d56494-ldgsm_calico-system(8cf6c8e5-97b0-4acf-833b-96387e1e4a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:31.692173 systemd[1]: sshd@20-10.0.0.42:22-10.0.0.1:55202.service: Deactivated successfully. Jan 23 01:12:31.700026 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:12:31.704763 systemd-logind[1529]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:12:31.709521 containerd[1546]: time="2026-01-23T01:12:31.706542645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:12:31.720395 systemd-logind[1529]: Removed session 21. 
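Interleaved with the pull failures, sshd and systemd-logind record a steady cadence of short-lived sessions (15 through 27 over this stretch), each backed by a per-connection sshd@...service unit plus a session-N.scope. Pairing the "New session" and "Removed session" lines is a quick way to verify that every session was cleanly torn down; a sketch over a saved journal dump (the input path is hypothetical):

    import re

    # Pair systemd-logind "New session N of user U." / "Removed session N."
    # events from a journal dump like this one.
    OPEN = re.compile(r"New session (\d+) of user (\S+)\.")
    CLOSE = re.compile(r"Removed session (\d+)\.")

    def audit_sessions(path="journal.log"):
        with open(path) as f:
            text = f.read()
        opened = {m.group(1): m.group(2) for m in OPEN.finditer(text)}
        closed = {m.group(1) for m in CLOSE.finditer(text)}
        for sid in sorted(opened, key=int):
            state = "closed" if sid in closed else "STILL OPEN"
            print(f"session {sid} (user {opened[sid]}): {state}")

    audit_sessions()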
Jan 23 01:12:31.797633 containerd[1546]: time="2026-01-23T01:12:31.797482238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:31.801826 containerd[1546]: time="2026-01-23T01:12:31.801056848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:12:31.801826 containerd[1546]: time="2026-01-23T01:12:31.801207109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:12:31.807089 kubelet[2845]: E0123 01:12:31.806490 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:12:31.807089 kubelet[2845]: E0123 01:12:31.806991 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:12:31.807193 kubelet[2845]: E0123 01:12:31.807106 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8554d56494-ldgsm_calico-system(8cf6c8e5-97b0-4acf-833b-96387e1e4a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:31.807193 kubelet[2845]: E0123 01:12:31.807166 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:12:33.543918 kubelet[2845]: E0123 01:12:33.543463 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:12:34.549216 kubelet[2845]: E0123 01:12:34.549074 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:12:36.714855 systemd[1]: Started sshd@21-10.0.0.42:22-10.0.0.1:35822.service - OpenSSH per-connection server daemon (10.0.0.1:35822). Jan 23 01:12:36.881064 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 35822 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:36.898596 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:36.918451 systemd-logind[1529]: New session 22 of user core. Jan 23 01:12:36.935813 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:12:37.319201 sshd[5776]: Connection closed by 10.0.0.1 port 35822 Jan 23 01:12:37.320796 sshd-session[5773]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:37.331021 systemd[1]: sshd@21-10.0.0.42:22-10.0.0.1:35822.service: Deactivated successfully. Jan 23 01:12:37.337150 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:12:37.345107 systemd-logind[1529]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:12:37.352393 systemd-logind[1529]: Removed session 22. 
Jan 23 01:12:39.549961 containerd[1546]: time="2026-01-23T01:12:39.549490186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:12:39.646965 containerd[1546]: time="2026-01-23T01:12:39.646547960Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:39.656963 containerd[1546]: time="2026-01-23T01:12:39.656597275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:12:39.656963 containerd[1546]: time="2026-01-23T01:12:39.656935056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:12:39.657606 kubelet[2845]: E0123 01:12:39.657481 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:39.657606 kubelet[2845]: E0123 01:12:39.657554 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:39.660880 kubelet[2845]: E0123 01:12:39.658155 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:39.662841 containerd[1546]: time="2026-01-23T01:12:39.662496133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:12:39.758964 containerd[1546]: time="2026-01-23T01:12:39.758603493Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:39.764105 containerd[1546]: time="2026-01-23T01:12:39.763900845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:12:39.764221 containerd[1546]: time="2026-01-23T01:12:39.764102411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:12:39.764834 kubelet[2845]: E0123 01:12:39.764584 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 
01:12:39.764834 kubelet[2845]: E0123 01:12:39.764819 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:12:39.765034 kubelet[2845]: E0123 01:12:39.764934 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:39.765034 kubelet[2845]: E0123 01:12:39.764998 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:12:42.349052 systemd[1]: Started sshd@22-10.0.0.42:22-10.0.0.1:35832.service - OpenSSH per-connection server daemon (10.0.0.1:35832). Jan 23 01:12:42.548554 containerd[1546]: time="2026-01-23T01:12:42.546990725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:12:42.559743 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 35832 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:42.567076 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:42.629480 systemd-logind[1529]: New session 23 of user core. Jan 23 01:12:42.639462 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 01:12:42.709616 containerd[1546]: time="2026-01-23T01:12:42.709153583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:42.716380 containerd[1546]: time="2026-01-23T01:12:42.713377129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:12:42.716380 containerd[1546]: time="2026-01-23T01:12:42.713491432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:42.716539 kubelet[2845]: E0123 01:12:42.713845 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:42.716539 kubelet[2845]: E0123 01:12:42.713899 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:42.716539 kubelet[2845]: E0123 01:12:42.714079 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:42.716539 kubelet[2845]: E0123 01:12:42.714125 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:12:43.104757 sshd[5793]: Connection closed by 10.0.0.1 port 35832 Jan 23 01:12:43.105163 sshd-session[5790]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:43.120422 systemd[1]: sshd@22-10.0.0.42:22-10.0.0.1:35832.service: Deactivated successfully. Jan 23 01:12:43.140490 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 01:12:43.149978 systemd-logind[1529]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:12:43.165418 systemd-logind[1529]: Removed session 23. 
Jan 23 01:12:44.906584 containerd[1546]: time="2026-01-23T01:12:44.906140890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:12:45.028441 containerd[1546]: time="2026-01-23T01:12:45.026833152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:45.041523 containerd[1546]: time="2026-01-23T01:12:45.039801994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:12:45.042477 containerd[1546]: time="2026-01-23T01:12:45.041816655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:45.054119 kubelet[2845]: E0123 01:12:45.053618 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:12:45.054119 kubelet[2845]: E0123 01:12:45.053897 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:12:45.054119 kubelet[2845]: E0123 01:12:45.054003 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:45.054119 kubelet[2845]: E0123 01:12:45.054051 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:12:45.556884 kubelet[2845]: E0123 01:12:45.554990 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:12:47.550207 containerd[1546]: time="2026-01-23T01:12:47.549973304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:12:47.691469 containerd[1546]: time="2026-01-23T01:12:47.691152185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:47.719880 containerd[1546]: time="2026-01-23T01:12:47.719179150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:12:47.719880 containerd[1546]: time="2026-01-23T01:12:47.719824445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:12:47.724442 kubelet[2845]: E0123 01:12:47.722472 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:12:47.724442 kubelet[2845]: E0123 01:12:47.722541 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:12:47.724442 kubelet[2845]: E0123 01:12:47.722746 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:47.724442 kubelet[2845]: E0123 01:12:47.722803 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:12:48.138941 systemd[1]: Started sshd@23-10.0.0.42:22-10.0.0.1:48570.service - OpenSSH per-connection server daemon (10.0.0.1:48570). 
Jan 23 01:12:48.336904 sshd[5837]: Accepted publickey for core from 10.0.0.1 port 48570 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:48.345611 sshd-session[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:48.388104 systemd-logind[1529]: New session 24 of user core. Jan 23 01:12:48.420101 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 01:12:48.552215 containerd[1546]: time="2026-01-23T01:12:48.549854176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:12:48.686474 containerd[1546]: time="2026-01-23T01:12:48.683964296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:48.694622 containerd[1546]: time="2026-01-23T01:12:48.694562312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:12:48.702169 containerd[1546]: time="2026-01-23T01:12:48.694759616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:12:48.702843 kubelet[2845]: E0123 01:12:48.702796 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:48.702968 kubelet[2845]: E0123 01:12:48.702948 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:12:48.703121 kubelet[2845]: E0123 01:12:48.703096 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:48.703209 kubelet[2845]: E0123 01:12:48.703188 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:12:48.997581 sshd[5840]: Connection closed by 10.0.0.1 port 48570 Jan 23 01:12:49.000044 sshd-session[5837]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:49.017476 systemd[1]: sshd@23-10.0.0.42:22-10.0.0.1:48570.service: Deactivated successfully. 
Jan 23 01:12:49.022155 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 01:12:49.031961 systemd-logind[1529]: Session 24 logged out. Waiting for processes to exit. Jan 23 01:12:49.041017 systemd-logind[1529]: Removed session 24. Jan 23 01:12:53.542917 kubelet[2845]: E0123 01:12:53.541893 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:12:54.044884 systemd[1]: Started sshd@24-10.0.0.42:22-10.0.0.1:51846.service - OpenSSH per-connection server daemon (10.0.0.1:51846). Jan 23 01:12:54.222999 sshd[5853]: Accepted publickey for core from 10.0.0.1 port 51846 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:12:54.227865 sshd-session[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:12:54.251363 systemd-logind[1529]: New session 25 of user core. Jan 23 01:12:54.281782 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 01:12:54.755492 sshd[5856]: Connection closed by 10.0.0.1 port 51846 Jan 23 01:12:54.762138 sshd-session[5853]: pam_unix(sshd:session): session closed for user core Jan 23 01:12:54.788605 systemd[1]: sshd@24-10.0.0.42:22-10.0.0.1:51846.service: Deactivated successfully. Jan 23 01:12:54.799514 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 01:12:54.823126 systemd-logind[1529]: Session 25 logged out. Waiting for processes to exit. Jan 23 01:12:54.833159 systemd-logind[1529]: Removed session 25. 
Jan 23 01:12:56.557412 kubelet[2845]: E0123 01:12:56.552555 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:12:57.545571 kubelet[2845]: E0123 01:12:57.545515 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:12:57.549173 kubelet[2845]: E0123 01:12:57.547854 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:12:59.810112 systemd[1]: Started sshd@25-10.0.0.42:22-10.0.0.1:51850.service - OpenSSH per-connection server daemon (10.0.0.1:51850). Jan 23 01:13:00.054870 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 51850 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:00.072952 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:00.109112 systemd-logind[1529]: New session 26 of user core. Jan 23 01:13:00.126949 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 23 01:13:00.546892 kubelet[2845]: E0123 01:13:00.546107 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:13:00.774029 sshd[5876]: Connection closed by 10.0.0.1 port 51850 Jan 23 01:13:00.787035 sshd-session[5873]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:00.802907 systemd-logind[1529]: Session 26 logged out. Waiting for processes to exit. Jan 23 01:13:00.804167 systemd[1]: sshd@25-10.0.0.42:22-10.0.0.1:51850.service: Deactivated successfully. Jan 23 01:13:00.815582 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 01:13:00.826902 systemd-logind[1529]: Removed session 26. Jan 23 01:13:01.544333 kubelet[2845]: E0123 01:13:01.543455 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:13:04.566506 kubelet[2845]: E0123 01:13:04.566183 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:13:05.826103 systemd[1]: Started sshd@26-10.0.0.42:22-10.0.0.1:52142.service - OpenSSH per-connection server daemon (10.0.0.1:52142). Jan 23 01:13:06.090948 sshd[5890]: Accepted publickey for core from 10.0.0.1 port 52142 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:06.093133 sshd-session[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:06.114653 systemd-logind[1529]: New session 27 of user core. Jan 23 01:13:06.129581 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 23 01:13:06.691854 sshd[5893]: Connection closed by 10.0.0.1 port 52142 Jan 23 01:13:06.697813 sshd-session[5890]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:06.713498 systemd[1]: sshd@26-10.0.0.42:22-10.0.0.1:52142.service: Deactivated successfully. Jan 23 01:13:06.726134 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 01:13:06.732584 systemd-logind[1529]: Session 27 logged out. Waiting for processes to exit. Jan 23 01:13:06.736632 systemd-logind[1529]: Removed session 27. Jan 23 01:13:07.554489 kubelet[2845]: E0123 01:13:07.554077 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:13:09.547165 kubelet[2845]: E0123 01:13:09.546161 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:13:11.746622 systemd[1]: Started sshd@27-10.0.0.42:22-10.0.0.1:52150.service - OpenSSH per-connection server daemon (10.0.0.1:52150). Jan 23 01:13:12.073651 sshd[5907]: Accepted publickey for core from 10.0.0.1 port 52150 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:12.101090 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:12.141475 systemd-logind[1529]: New session 28 of user core. Jan 23 01:13:12.159922 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 23 01:13:12.594148 kubelet[2845]: E0123 01:13:12.574193 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:13:12.773110 sshd[5910]: Connection closed by 10.0.0.1 port 52150 Jan 23 01:13:12.800061 sshd-session[5907]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:12.823609 systemd[1]: sshd@27-10.0.0.42:22-10.0.0.1:52150.service: Deactivated successfully. Jan 23 01:13:12.843551 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 01:13:12.861580 systemd-logind[1529]: Session 28 logged out. Waiting for processes to exit. Jan 23 01:13:12.890629 systemd-logind[1529]: Removed session 28. Jan 23 01:13:14.543395 kubelet[2845]: E0123 01:13:14.543158 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:13:15.598195 kubelet[2845]: E0123 01:13:15.584899 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:13:16.581146 kubelet[2845]: E0123 01:13:16.575023 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:13:17.840489 systemd[1]: Started sshd@28-10.0.0.42:22-10.0.0.1:48848.service - OpenSSH per-connection server daemon (10.0.0.1:48848). Jan 23 01:13:18.101120 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 48848 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:18.107664 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:18.144595 systemd-logind[1529]: New session 29 of user core. Jan 23 01:13:18.159187 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 23 01:13:18.571530 kubelet[2845]: E0123 01:13:18.570206 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:13:18.614374 kubelet[2845]: E0123 01:13:18.595090 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:13:18.649330 kubelet[2845]: E0123 01:13:18.648967 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:13:18.907640 sshd[5958]: Connection closed by 10.0.0.1 port 48848 Jan 23 01:13:18.908912 sshd-session[5955]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:18.931569 systemd[1]: sshd@28-10.0.0.42:22-10.0.0.1:48848.service: Deactivated successfully. Jan 23 01:13:18.948141 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 01:13:18.973141 systemd-logind[1529]: Session 29 logged out. Waiting for processes to exit. Jan 23 01:13:19.010875 systemd-logind[1529]: Removed session 29. 
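
The recurring "Nameserver limits exceeded" entries are a separate, benign warning: glibc resolvers only consult the first three nameserver entries, so the kubelet truncates the list and logs which servers it actually applied (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A sketch of the check, with the path and the limit of three assumed from upstream defaults rather than read from this host:

    MAX_NAMESERVERS = 3  # assumed kubelet/glibc limit

    def nameservers(path="/etc/resolv.conf"):
        out = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    out.append(parts[1])
        return out

    ns = nameservers()
    if len(ns) > MAX_NAMESERVERS:
        # Mirrors the log: extra servers are omitted and the first three
        # become "the applied nameserver line".
        print("Nameserver limits exceeded; applied:", " ".join(ns[:MAX_NAMESERVERS]))

Trimming the host's resolv.conf to three nameservers would silence the warning; it does not affect the image-pull failures.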
Jan 23 01:13:21.540049 kubelet[2845]: E0123 01:13:21.539920 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:13:21.554106 containerd[1546]: time="2026-01-23T01:13:21.554053854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:13:21.638582 containerd[1546]: time="2026-01-23T01:13:21.636877429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:21.651994 containerd[1546]: time="2026-01-23T01:13:21.651108520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:13:21.651994 containerd[1546]: time="2026-01-23T01:13:21.651466819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:13:21.652158 kubelet[2845]: E0123 01:13:21.651816 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:13:21.652158 kubelet[2845]: E0123 01:13:21.651878 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:13:21.652158 kubelet[2845]: E0123 01:13:21.651973 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8554d56494-ldgsm_calico-system(8cf6c8e5-97b0-4acf-833b-96387e1e4a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:21.658137 containerd[1546]: time="2026-01-23T01:13:21.657814048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:13:21.755560 containerd[1546]: time="2026-01-23T01:13:21.753091446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:21.764009 containerd[1546]: time="2026-01-23T01:13:21.762619424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:13:21.768912 containerd[1546]: time="2026-01-23T01:13:21.763626963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:13:21.769032 kubelet[2845]: E0123 01:13:21.767991 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:13:21.769032 kubelet[2845]: E0123 01:13:21.768052 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:13:21.769032 kubelet[2845]: E0123 01:13:21.768153 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8554d56494-ldgsm_calico-system(8cf6c8e5-97b0-4acf-833b-96387e1e4a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:21.769165 kubelet[2845]: E0123 01:13:21.768212 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:13:23.546818 containerd[1546]: time="2026-01-23T01:13:23.542534903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:13:23.650411 containerd[1546]: time="2026-01-23T01:13:23.649141001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:23.658840 containerd[1546]: time="2026-01-23T01:13:23.658191418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:13:23.661116 containerd[1546]: time="2026-01-23T01:13:23.660575760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:13:23.663109 kubelet[2845]: E0123 01:13:23.661525 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:13:23.663109 kubelet[2845]: E0123 01:13:23.661604 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:13:23.665189 kubelet[2845]: E0123 01:13:23.663978 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68948fdbd6-4jxhx_calico-apiserver(3b1f5033-cf51-4add-93c1-34dedb396092): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:23.665189 kubelet[2845]: E0123 01:13:23.664032 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:13:23.971552 systemd[1]: Started sshd@29-10.0.0.42:22-10.0.0.1:43672.service - OpenSSH per-connection server daemon (10.0.0.1:43672). Jan 23 01:13:24.188467 sshd[5990]: Accepted publickey for core from 10.0.0.1 port 43672 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:24.198214 sshd-session[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:24.232444 systemd-logind[1529]: New session 30 of user core. Jan 23 01:13:24.253644 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 23 01:13:24.858053 sshd[5995]: Connection closed by 10.0.0.1 port 43672 Jan 23 01:13:24.857990 sshd-session[5990]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:24.873429 systemd[1]: sshd@29-10.0.0.42:22-10.0.0.1:43672.service: Deactivated successfully. Jan 23 01:13:24.880470 systemd[1]: session-30.scope: Deactivated successfully. Jan 23 01:13:24.888183 systemd-logind[1529]: Session 30 logged out. Waiting for processes to exit. Jan 23 01:13:24.898092 systemd-logind[1529]: Removed session 30. 
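
The alternation between ErrImagePull (a pull was attempted and failed) and ImagePullBackOff (the kubelet is waiting before retrying) follows a per-image exponential backoff, which is why fresh "PullImage" attempts in this capture arrive at widening intervals. A sketch of the schedule; the 10s initial delay, factor of 2, and 300s cap mirror upstream kubelet defaults but are assumptions here, not values read from this node:

    def backoff_schedule(initial=10.0, cap=300.0, factor=2.0):
        # Yields the delay before each successive retry of the same image.
        delay = initial
        while True:
            yield delay
            delay = min(delay * factor, cap)

    for attempt, delay in zip(range(1, 7), backoff_schedule()):
        print(f"attempt {attempt}: retry after {delay:.0f}s")

Once the cap is reached, each image is retried roughly every five minutes, matching the steady drumbeat of back-off messages through the rest of the log.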
Jan 23 01:13:27.546536 kubelet[2845]: E0123 01:13:27.545625 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:13:27.547446 kubelet[2845]: E0123 01:13:27.546542 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:13:29.553345 containerd[1546]: time="2026-01-23T01:13:29.551893597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:13:29.644585 containerd[1546]: time="2026-01-23T01:13:29.642174880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:29.656499 containerd[1546]: time="2026-01-23T01:13:29.654809791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:13:29.656499 containerd[1546]: time="2026-01-23T01:13:29.654939002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:13:29.656499 containerd[1546]: time="2026-01-23T01:13:29.656060587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:13:29.656678 kubelet[2845]: E0123 01:13:29.655454 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:13:29.656678 kubelet[2845]: E0123 01:13:29.655524 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:13:29.657549 kubelet[2845]: E0123 01:13:29.657006 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-86ccb5f87d-dkzhd_calico-system(369aa670-4b29-4a0c-8fff-6ae07d46c778): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:29.657549 kubelet[2845]: E0123 01:13:29.657069 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" 
podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:13:29.747489 containerd[1546]: time="2026-01-23T01:13:29.745485543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:29.750568 containerd[1546]: time="2026-01-23T01:13:29.750440093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:13:29.750676 containerd[1546]: time="2026-01-23T01:13:29.750644705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:13:29.753139 kubelet[2845]: E0123 01:13:29.751673 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:13:29.753139 kubelet[2845]: E0123 01:13:29.751846 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:13:29.753139 kubelet[2845]: E0123 01:13:29.751952 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68948fdbd6-mdlvc_calico-apiserver(5865bb49-f0fe-4eb4-8f6c-74bc939474ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:29.753139 kubelet[2845]: E0123 01:13:29.751998 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:13:29.895641 systemd[1]: Started sshd@30-10.0.0.42:22-10.0.0.1:43678.service - OpenSSH per-connection server daemon (10.0.0.1:43678). Jan 23 01:13:30.092972 sshd[6017]: Accepted publickey for core from 10.0.0.1 port 43678 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:30.096085 sshd-session[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:30.117664 systemd-logind[1529]: New session 31 of user core. Jan 23 01:13:30.127933 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 23 01:13:30.544600 containerd[1546]: time="2026-01-23T01:13:30.541606638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:13:30.596177 sshd[6020]: Connection closed by 10.0.0.1 port 43678 Jan 23 01:13:30.597862 sshd-session[6017]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:30.616675 systemd[1]: sshd@30-10.0.0.42:22-10.0.0.1:43678.service: Deactivated successfully. Jan 23 01:13:30.628985 systemd[1]: session-31.scope: Deactivated successfully. Jan 23 01:13:30.635379 containerd[1546]: time="2026-01-23T01:13:30.634554439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:30.638815 systemd-logind[1529]: Session 31 logged out. Waiting for processes to exit. Jan 23 01:13:30.644908 containerd[1546]: time="2026-01-23T01:13:30.644861193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:13:30.645067 containerd[1546]: time="2026-01-23T01:13:30.644999732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:13:30.655005 kubelet[2845]: E0123 01:13:30.652945 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:13:30.655005 kubelet[2845]: E0123 01:13:30.653183 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:13:30.655005 kubelet[2845]: E0123 01:13:30.653454 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:30.659037 containerd[1546]: time="2026-01-23T01:13:30.658471278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:13:30.658928 systemd-logind[1529]: Removed session 31. 
Jan 23 01:13:30.749439 containerd[1546]: time="2026-01-23T01:13:30.749025987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:30.755618 containerd[1546]: time="2026-01-23T01:13:30.755178202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:13:30.755892 containerd[1546]: time="2026-01-23T01:13:30.755645244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:13:30.758697 kubelet[2845]: E0123 01:13:30.756084 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:13:30.758697 kubelet[2845]: E0123 01:13:30.757184 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:13:30.758697 kubelet[2845]: E0123 01:13:30.757495 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7r7jt_calico-system(e2553e8f-fa3b-4995-9072-1f7cce3ee2c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:30.759866 kubelet[2845]: E0123 01:13:30.757569 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:13:32.559084 containerd[1546]: time="2026-01-23T01:13:32.558513726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:13:32.659849 containerd[1546]: time="2026-01-23T01:13:32.658521472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:13:32.668059 containerd[1546]: time="2026-01-23T01:13:32.667814279Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:13:32.668059 containerd[1546]: time="2026-01-23T01:13:32.667933772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:13:32.670081 kubelet[2845]: E0123 01:13:32.669418 2845 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:13:32.670081 kubelet[2845]: E0123 01:13:32.669577 2845 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:13:32.670081 kubelet[2845]: E0123 01:13:32.669672 2845 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bmvtb_calico-system(662d18b3-33cf-4000-b003-c8e7f6b2e810): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:13:32.670081 kubelet[2845]: E0123 01:13:32.669824 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:13:33.539527 kubelet[2845]: E0123 01:13:33.538613 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:13:35.652572 systemd[1]: Started sshd@31-10.0.0.42:22-10.0.0.1:42662.service - OpenSSH per-connection server daemon (10.0.0.1:42662). Jan 23 01:13:35.915831 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 42662 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:35.948158 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:36.017064 systemd-logind[1529]: New session 32 of user core. Jan 23 01:13:36.069705 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 23 01:13:36.649030 sshd[6037]: Connection closed by 10.0.0.1 port 42662 Jan 23 01:13:36.652808 sshd-session[6034]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:36.665671 systemd[1]: sshd@31-10.0.0.42:22-10.0.0.1:42662.service: Deactivated successfully. Jan 23 01:13:36.675090 systemd[1]: session-32.scope: Deactivated successfully. 
Jan 23 01:13:36.692621 systemd-logind[1529]: Session 32 logged out. Waiting for processes to exit. Jan 23 01:13:36.698701 systemd-logind[1529]: Removed session 32. Jan 23 01:13:37.550158 kubelet[2845]: E0123 01:13:37.546079 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:13:37.553676 kubelet[2845]: E0123 01:13:37.552059 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:13:41.720109 systemd[1]: Started sshd@32-10.0.0.42:22-10.0.0.1:42668.service - OpenSSH per-connection server daemon (10.0.0.1:42668). Jan 23 01:13:41.966179 sshd[6050]: Accepted publickey for core from 10.0.0.1 port 42668 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:41.970948 sshd-session[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:42.040058 systemd-logind[1529]: New session 33 of user core. Jan 23 01:13:42.062483 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 23 01:13:42.701532 sshd[6053]: Connection closed by 10.0.0.1 port 42668 Jan 23 01:13:42.704889 sshd-session[6050]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:42.727662 systemd[1]: sshd@32-10.0.0.42:22-10.0.0.1:42668.service: Deactivated successfully. Jan 23 01:13:42.732591 systemd[1]: session-33.scope: Deactivated successfully. Jan 23 01:13:42.750192 systemd-logind[1529]: Session 33 logged out. Waiting for processes to exit. Jan 23 01:13:42.770585 systemd-logind[1529]: Removed session 33. 
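
Because the same handful of missing tags generates hundreds of near-identical entries, a small triage helper makes the distinct failing references visible at a glance. This sketch only relies on the "failed to resolve reference" format shown in the lines above; feeding it the journal text (for example via journalctl piped to stdin) is an assumed usage, not something the log itself prescribes:

    import re, sys
    from collections import Counter

    # Match the reference inside 'failed to resolve reference "ghcr.io/..."',
    # tolerating the escaped quotes that nested error strings add.
    pat = re.compile(r'failed to resolve reference \\*"(ghcr\.io/[^"\\]+)')

    def failing_images(text):
        return Counter(pat.findall(text))

    if __name__ == "__main__":
        for ref, n in failing_images(sys.stdin.read()).most_common():
            print(f"{n:4d}  {ref}")

Run against this capture it would reduce the noise to one line per missing image (apiserver, csi, node-driver-registrar, goldmane, whisker, whisker-backend, kube-controllers, all at v3.30.4), which is the actionable core of the incident.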
Jan 23 01:13:43.548467 kubelet[2845]: E0123 01:13:43.547610 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:13:43.548467 kubelet[2845]: E0123 01:13:43.547986 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:13:43.555051 kubelet[2845]: E0123 01:13:43.554968 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8" Jan 23 01:13:44.545513 kubelet[2845]: E0123 01:13:44.545114 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:13:46.556390 kubelet[2845]: E0123 01:13:46.554980 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:13:47.763069 systemd[1]: Started sshd@33-10.0.0.42:22-10.0.0.1:37694.service - OpenSSH per-connection server daemon (10.0.0.1:37694). 
Jan 23 01:13:48.402446 sshd[6094]: Accepted publickey for core from 10.0.0.1 port 37694 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:48.409020 sshd-session[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:48.452671 systemd-logind[1529]: New session 34 of user core. Jan 23 01:13:48.470693 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 23 01:13:49.262847 sshd[6097]: Connection closed by 10.0.0.1 port 37694 Jan 23 01:13:49.265011 sshd-session[6094]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:49.292923 systemd[1]: sshd@33-10.0.0.42:22-10.0.0.1:37694.service: Deactivated successfully. Jan 23 01:13:49.314031 systemd[1]: session-34.scope: Deactivated successfully. Jan 23 01:13:49.341701 systemd-logind[1529]: Session 34 logged out. Waiting for processes to exit. Jan 23 01:13:49.382626 systemd-logind[1529]: Removed session 34. Jan 23 01:13:50.566070 kubelet[2845]: E0123 01:13:50.565632 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45" Jan 23 01:13:51.554433 kubelet[2845]: E0123 01:13:51.552492 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092" Jan 23 01:13:52.543419 kubelet[2845]: E0123 01:13:52.542688 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:13:54.046645 containerd[1546]: time="2026-01-23T01:13:54.036036937Z" level=warning msg="container event discarded" container=aede2888962bfd7871a6d9bce88e5d95bf111ffec54c2d4f838254e1e19d4cff type=CONTAINER_CREATED_EVENT Jan 23 01:13:54.131569 containerd[1546]: time="2026-01-23T01:13:54.131161956Z" level=warning msg="container event discarded" container=aede2888962bfd7871a6d9bce88e5d95bf111ffec54c2d4f838254e1e19d4cff type=CONTAINER_STARTED_EVENT Jan 23 01:13:54.133615 containerd[1546]: time="2026-01-23T01:13:54.132873810Z" level=warning msg="container event discarded" container=6bac153a415ff5a55b88ef25029b82b52e2dfcab536997a00c03aeb0bf8794c6 
type=CONTAINER_CREATED_EVENT Jan 23 01:13:54.133615 containerd[1546]: time="2026-01-23T01:13:54.132899798Z" level=warning msg="container event discarded" container=6bac153a415ff5a55b88ef25029b82b52e2dfcab536997a00c03aeb0bf8794c6 type=CONTAINER_STARTED_EVENT Jan 23 01:13:54.133615 containerd[1546]: time="2026-01-23T01:13:54.132914015Z" level=warning msg="container event discarded" container=086073ed754cb6500dbd81b2ba6c01cf9abc8d7088fb76b2250c6f8c5130a195 type=CONTAINER_CREATED_EVENT Jan 23 01:13:54.133615 containerd[1546]: time="2026-01-23T01:13:54.132923703Z" level=warning msg="container event discarded" container=086073ed754cb6500dbd81b2ba6c01cf9abc8d7088fb76b2250c6f8c5130a195 type=CONTAINER_STARTED_EVENT Jan 23 01:13:54.133615 containerd[1546]: time="2026-01-23T01:13:54.132935024Z" level=warning msg="container event discarded" container=54ff2fe23986bb6e5d1a1188fcd6d34c3d67d01ff613e2423baeda10ca8e8b09 type=CONTAINER_CREATED_EVENT Jan 23 01:13:54.133615 containerd[1546]: time="2026-01-23T01:13:54.132944241Z" level=warning msg="container event discarded" container=b3b74848d01d001f5f811e2fe5f94fcb2b97cdb05d836e7084d8d6083686dbb0 type=CONTAINER_CREATED_EVENT Jan 23 01:13:54.133615 containerd[1546]: time="2026-01-23T01:13:54.132954872Z" level=warning msg="container event discarded" container=743be9c9ad518476c537f4ea17e5ff61921d118ad3017a3040af09b78af7868f type=CONTAINER_CREATED_EVENT Jan 23 01:13:54.307631 systemd[1]: Started sshd@34-10.0.0.42:22-10.0.0.1:39954.service - OpenSSH per-connection server daemon (10.0.0.1:39954). Jan 23 01:13:54.333199 containerd[1546]: time="2026-01-23T01:13:54.331556521Z" level=warning msg="container event discarded" container=b3b74848d01d001f5f811e2fe5f94fcb2b97cdb05d836e7084d8d6083686dbb0 type=CONTAINER_STARTED_EVENT Jan 23 01:13:54.374734 containerd[1546]: time="2026-01-23T01:13:54.374564676Z" level=warning msg="container event discarded" container=54ff2fe23986bb6e5d1a1188fcd6d34c3d67d01ff613e2423baeda10ca8e8b09 type=CONTAINER_STARTED_EVENT Jan 23 01:13:54.394142 containerd[1546]: time="2026-01-23T01:13:54.390212079Z" level=warning msg="container event discarded" container=743be9c9ad518476c537f4ea17e5ff61921d118ad3017a3040af09b78af7868f type=CONTAINER_STARTED_EVENT Jan 23 01:13:54.497535 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 39954 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:54.499672 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:54.547050 systemd-logind[1529]: New session 35 of user core. Jan 23 01:13:54.561868 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 23 01:13:55.298656 sshd[6115]: Connection closed by 10.0.0.1 port 39954 Jan 23 01:13:55.300208 sshd-session[6112]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:55.315130 systemd[1]: Started sshd@35-10.0.0.42:22-10.0.0.1:39964.service - OpenSSH per-connection server daemon (10.0.0.1:39964). Jan 23 01:13:55.326955 systemd[1]: sshd@34-10.0.0.42:22-10.0.0.1:39954.service: Deactivated successfully. Jan 23 01:13:55.334917 systemd[1]: session-35.scope: Deactivated successfully. Jan 23 01:13:55.340627 systemd-logind[1529]: Session 35 logged out. Waiting for processes to exit. Jan 23 01:13:55.354703 systemd-logind[1529]: Removed session 35. 
Jan 23 01:13:55.525958 sshd[6127]: Accepted publickey for core from 10.0.0.1 port 39964 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:55.537562 sshd-session[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:55.568179 kubelet[2845]: E0123 01:13:55.561558 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad" Jan 23 01:13:55.566548 systemd-logind[1529]: New session 36 of user core. Jan 23 01:13:55.593114 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 23 01:13:56.545924 kubelet[2845]: E0123 01:13:56.544570 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778" Jan 23 01:13:57.393542 sshd[6133]: Connection closed by 10.0.0.1 port 39964 Jan 23 01:13:57.395051 sshd-session[6127]: pam_unix(sshd:session): session closed for user core Jan 23 01:13:57.429511 systemd[1]: Started sshd@36-10.0.0.42:22-10.0.0.1:39980.service - OpenSSH per-connection server daemon (10.0.0.1:39980). Jan 23 01:13:57.430680 systemd[1]: sshd@35-10.0.0.42:22-10.0.0.1:39964.service: Deactivated successfully. Jan 23 01:13:57.439010 systemd[1]: session-36.scope: Deactivated successfully. Jan 23 01:13:57.447117 systemd-logind[1529]: Session 36 logged out. Waiting for processes to exit. Jan 23 01:13:57.455600 systemd-logind[1529]: Removed session 36. Jan 23 01:13:57.548202 kubelet[2845]: E0123 01:13:57.548035 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810" Jan 23 01:13:57.659134 sshd[6144]: Accepted publickey for core from 10.0.0.1 port 39980 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:13:57.669527 sshd-session[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:13:57.697435 systemd-logind[1529]: New session 37 of user core. Jan 23 01:13:57.713013 systemd[1]: Started session-37.scope - Session 37 of User core. 
Jan 23 01:13:58.551495 kubelet[2845]: E0123 01:13:58.550745 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8"
Jan 23 01:14:00.416446 sshd[6150]: Connection closed by 10.0.0.1 port 39980
Jan 23 01:14:00.421539 sshd-session[6144]: pam_unix(sshd:session): session closed for user core
Jan 23 01:14:00.453723 systemd[1]: sshd@36-10.0.0.42:22-10.0.0.1:39980.service: Deactivated successfully.
Jan 23 01:14:00.471060 systemd[1]: session-37.scope: Deactivated successfully.
Jan 23 01:14:00.474189 systemd[1]: session-37.scope: Consumed 1.580s CPU time, 44.8M memory peak.
Jan 23 01:14:00.483079 systemd-logind[1529]: Session 37 logged out. Waiting for processes to exit.
Jan 23 01:14:00.510098 systemd[1]: Started sshd@37-10.0.0.42:22-10.0.0.1:39982.service - OpenSSH per-connection server daemon (10.0.0.1:39982).
Jan 23 01:14:00.515465 systemd-logind[1529]: Removed session 37.
Jan 23 01:14:00.778597 sshd[6173]: Accepted publickey for core from 10.0.0.1 port 39982 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg
Jan 23 01:14:00.787674 sshd-session[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:14:00.815163 systemd-logind[1529]: New session 38 of user core.
Jan 23 01:14:00.832557 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 23 01:14:02.059071 sshd[6176]: Connection closed by 10.0.0.1 port 39982
Jan 23 01:14:02.064761 sshd-session[6173]: pam_unix(sshd:session): session closed for user core
Jan 23 01:14:02.100209 systemd[1]: sshd@37-10.0.0.42:22-10.0.0.1:39982.service: Deactivated successfully.
Jan 23 01:14:02.119925 systemd[1]: session-38.scope: Deactivated successfully.
Jan 23 01:14:02.133468 systemd-logind[1529]: Session 38 logged out. Waiting for processes to exit.
Jan 23 01:14:02.142604 systemd[1]: Started sshd@38-10.0.0.42:22-10.0.0.1:39998.service - OpenSSH per-connection server daemon (10.0.0.1:39998).
Jan 23 01:14:02.158475 systemd-logind[1529]: Removed session 38.
Jan 23 01:14:02.361738 sshd[6189]: Accepted publickey for core from 10.0.0.1 port 39998 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg
Jan 23 01:14:02.366029 sshd-session[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:14:02.391998 systemd-logind[1529]: New session 39 of user core.
Jan 23 01:14:02.399698 systemd[1]: Started session-39.scope - Session 39 of User core.
Jan 23 01:14:02.902468 sshd[6192]: Connection closed by 10.0.0.1 port 39998
Jan 23 01:14:02.906131 sshd-session[6189]: pam_unix(sshd:session): session closed for user core
Jan 23 01:14:02.928622 systemd[1]: sshd@38-10.0.0.42:22-10.0.0.1:39998.service: Deactivated successfully.
Jan 23 01:14:02.937570 systemd[1]: session-39.scope: Deactivated successfully.
Jan 23 01:14:02.948030 systemd-logind[1529]: Session 39 logged out. Waiting for processes to exit.
Jan 23 01:14:02.953099 systemd-logind[1529]: Removed session 39.
Jan 23 01:14:04.553443 kubelet[2845]: E0123 01:14:04.553110 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45"
Jan 23 01:14:05.558430 kubelet[2845]: E0123 01:14:05.556048 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092"
Jan 23 01:14:07.961001 systemd[1]: Started sshd@39-10.0.0.42:22-10.0.0.1:48878.service - OpenSSH per-connection server daemon (10.0.0.1:48878).
Jan 23 01:14:08.250985 sshd[6206]: Accepted publickey for core from 10.0.0.1 port 48878 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg
Jan 23 01:14:08.257070 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:14:08.272624 systemd-logind[1529]: New session 40 of user core.
Jan 23 01:14:08.292506 systemd[1]: Started session-40.scope - Session 40 of User core.
Jan 23 01:14:08.854971 sshd[6209]: Connection closed by 10.0.0.1 port 48878
Jan 23 01:14:08.858626 sshd-session[6206]: pam_unix(sshd:session): session closed for user core
Jan 23 01:14:08.873211 systemd[1]: sshd@39-10.0.0.42:22-10.0.0.1:48878.service: Deactivated successfully.
Jan 23 01:14:08.875708 systemd-logind[1529]: Session 40 logged out. Waiting for processes to exit.
Jan 23 01:14:08.891738 systemd[1]: session-40.scope: Deactivated successfully.
Jan 23 01:14:08.907412 systemd-logind[1529]: Removed session 40.
Jan 23 01:14:09.574007 kubelet[2845]: E0123 01:14:09.573207 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad"
Jan 23 01:14:09.599008 kubelet[2845]: E0123 01:14:09.598790 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8"
Jan 23 01:14:11.553418 kubelet[2845]: E0123 01:14:11.550160 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778"
Jan 23 01:14:12.560114 kubelet[2845]: E0123 01:14:12.558547 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810"
Jan 23 01:14:13.902695 systemd[1]: Started sshd@40-10.0.0.42:22-10.0.0.1:56364.service - OpenSSH per-connection server daemon (10.0.0.1:56364).
Jan 23 01:14:14.048217 sshd[6222]: Accepted publickey for core from 10.0.0.1 port 56364 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg
Jan 23 01:14:14.051358 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:14:14.067531 systemd-logind[1529]: New session 41 of user core.
Jan 23 01:14:14.075708 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 23 01:14:14.355147 sshd[6225]: Connection closed by 10.0.0.1 port 56364
Jan 23 01:14:14.356529 sshd-session[6222]: pam_unix(sshd:session): session closed for user core
Jan 23 01:14:14.366040 systemd[1]: sshd@40-10.0.0.42:22-10.0.0.1:56364.service: Deactivated successfully.
Jan 23 01:14:14.373929 systemd[1]: session-41.scope: Deactivated successfully.
Jan 23 01:14:14.378221 systemd-logind[1529]: Session 41 logged out. Waiting for processes to exit.
Jan 23 01:14:14.381743 systemd-logind[1529]: Removed session 41.
Jan 23 01:14:15.545183 kubelet[2845]: E0123 01:14:15.545068 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8554d56494-ldgsm" podUID="8cf6c8e5-97b0-4acf-833b-96387e1e4a45"
Jan 23 01:14:19.382462 systemd[1]: Started sshd@41-10.0.0.42:22-10.0.0.1:56368.service - OpenSSH per-connection server daemon (10.0.0.1:56368).
Jan 23 01:14:19.493791 sshd[6265]: Accepted publickey for core from 10.0.0.1 port 56368 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg
Jan 23 01:14:19.496759 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:14:19.506294 systemd-logind[1529]: New session 42 of user core.
Jan 23 01:14:19.514668 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 23 01:14:19.543062 kubelet[2845]: E0123 01:14:19.542610 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-4jxhx" podUID="3b1f5033-cf51-4add-93c1-34dedb396092"
Jan 23 01:14:19.839133 sshd[6268]: Connection closed by 10.0.0.1 port 56368
Jan 23 01:14:19.843469 sshd-session[6265]: pam_unix(sshd:session): session closed for user core
Jan 23 01:14:19.856679 systemd-logind[1529]: Session 42 logged out. Waiting for processes to exit.
Jan 23 01:14:19.860192 systemd[1]: sshd@41-10.0.0.42:22-10.0.0.1:56368.service: Deactivated successfully.
Jan 23 01:14:19.865908 systemd[1]: session-42.scope: Deactivated successfully.
Jan 23 01:14:19.871062 systemd-logind[1529]: Removed session 42.
Jan 23 01:14:22.547198 kubelet[2845]: E0123 01:14:22.547097 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68948fdbd6-mdlvc" podUID="5865bb49-f0fe-4eb4-8f6c-74bc939474ad"
Jan 23 01:14:22.548118 kubelet[2845]: E0123 01:14:22.547221 2845 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:14:23.550410 kubelet[2845]: E0123 01:14:23.550033 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7r7jt" podUID="e2553e8f-fa3b-4995-9072-1f7cce3ee2c8"
Jan 23 01:14:24.860102 systemd[1]: Started sshd@42-10.0.0.42:22-10.0.0.1:42988.service - OpenSSH per-connection server daemon (10.0.0.1:42988).
Jan 23 01:14:25.000107 sshd[6283]: Accepted publickey for core from 10.0.0.1 port 42988 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg
Jan 23 01:14:25.018375 sshd-session[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:14:25.044174 systemd-logind[1529]: New session 43 of user core.
Jan 23 01:14:25.051625 systemd[1]: Started session-43.scope - Session 43 of User core.
Jan 23 01:14:25.279027 sshd[6287]: Connection closed by 10.0.0.1 port 42988
Jan 23 01:14:25.279556 sshd-session[6283]: pam_unix(sshd:session): session closed for user core
Jan 23 01:14:25.287670 systemd[1]: sshd@42-10.0.0.42:22-10.0.0.1:42988.service: Deactivated successfully.
Jan 23 01:14:25.296470 systemd[1]: session-43.scope: Deactivated successfully.
Jan 23 01:14:25.302027 systemd-logind[1529]: Session 43 logged out. Waiting for processes to exit.
Jan 23 01:14:25.306054 systemd-logind[1529]: Removed session 43.
Jan 23 01:14:25.539141 kubelet[2845]: E0123 01:14:25.538900 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86ccb5f87d-dkzhd" podUID="369aa670-4b29-4a0c-8fff-6ae07d46c778"
Jan 23 01:14:26.544701 kubelet[2845]: E0123 01:14:26.542758 2845 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bmvtb" podUID="662d18b3-33cf-4000-b003-c8e7f6b2e810"