Dec 12 18:33:21.085856 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:33:21.085912 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:33:21.085925 kernel: BIOS-provided physical RAM map:
Dec 12 18:33:21.085939 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 12 18:33:21.085956 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 12 18:33:21.085972 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 12 18:33:21.085985 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 12 18:33:21.085994 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 12 18:33:21.086003 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 12 18:33:21.086011 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 12 18:33:21.086024 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Dec 12 18:33:21.086034 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 12 18:33:21.086047 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 12 18:33:21.086055 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 12 18:33:21.086066 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 12 18:33:21.086074 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 12 18:33:21.086086 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Dec 12 18:33:21.086104 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Dec 12 18:33:21.086112 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Dec 12 18:33:21.086121 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Dec 12 18:33:21.086130 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 12 18:33:21.086139 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 12 18:33:21.086154 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 12 18:33:21.086163 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:33:21.086173 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 12 18:33:21.086196 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:33:21.086209 kernel: NX (Execute Disable) protection: active
Dec 12 18:33:21.086219 kernel: APIC: Static calls initialized
Dec 12 18:33:21.086232 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Dec 12 18:33:21.086248 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Dec 12 18:33:21.086259 kernel: extended physical RAM map:
Dec 12 18:33:21.086277 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 12 18:33:21.086294 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 12 18:33:21.086305 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 12 18:33:21.086314 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 12 18:33:21.086324 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 12 18:33:21.086333 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 12 18:33:21.086346 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 12 18:33:21.086365 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Dec 12 18:33:21.086385 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Dec 12 18:33:21.086399 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Dec 12 18:33:21.086409 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Dec 12 18:33:21.086419 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Dec 12 18:33:21.086436 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 12 18:33:21.086459 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 12 18:33:21.086470 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 12 18:33:21.086479 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 12 18:33:21.086489 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 12 18:33:21.086499 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Dec 12 18:33:21.086516 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Dec 12 18:33:21.086534 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Dec 12 18:33:21.086571 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Dec 12 18:33:21.086584 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 12 18:33:21.086602 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 12 18:33:21.086620 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 12 18:33:21.086635 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:33:21.086645 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 12 18:33:21.086655 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:33:21.086664 kernel: efi: EFI v2.7 by EDK II
Dec 12 18:33:21.086679 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Dec 12 18:33:21.086708 kernel: random: crng init done
Dec 12 18:33:21.086736 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Dec 12 18:33:21.086748 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Dec 12 18:33:21.086779 kernel: secureboot: Secure boot disabled
Dec 12 18:33:21.086808 kernel: SMBIOS 2.8 present.
Dec 12 18:33:21.086818 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 12 18:33:21.086838 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:33:21.086868 kernel: Hypervisor detected: KVM
Dec 12 18:33:21.086896 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 12 18:33:21.086907 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:33:21.086916 kernel: kvm-clock: using sched offset of 5816104628 cycles
Dec 12 18:33:21.086934 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:33:21.086952 kernel: tsc: Detected 2794.750 MHz processor
Dec 12 18:33:21.086965 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:33:21.086975 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:33:21.086985 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 12 18:33:21.086995 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 12 18:33:21.087012 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:33:21.087033 kernel: Using GB pages for direct mapping
Dec 12 18:33:21.087048 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:33:21.087059 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 12 18:33:21.087069 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 18:33:21.087080 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:33:21.087094 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:33:21.087113 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 12 18:33:21.087128 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:33:21.087143 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:33:21.087154 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:33:21.087164 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:33:21.087196 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 12 18:33:21.087215 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 12 18:33:21.087227 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Dec 12 18:33:21.087237 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 12 18:33:21.087247 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 12 18:33:21.087261 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 12 18:33:21.087279 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 12 18:33:21.087297 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 12 18:33:21.087309 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 12 18:33:21.087320 kernel: No NUMA configuration found
Dec 12 18:33:21.087330 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Dec 12 18:33:21.087339 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Dec 12 18:33:21.087352 kernel: Zone ranges:
Dec 12 18:33:21.087371 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:33:21.087388 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Dec 12 18:33:21.087403 kernel: Normal empty
Dec 12 18:33:21.087413 kernel: Device empty
Dec 12 18:33:21.087423 kernel: Movable zone start for each node
Dec 12 18:33:21.087438 kernel: Early memory node ranges
Dec 12 18:33:21.087457 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 12 18:33:21.087471 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 12 18:33:21.087482 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 12 18:33:21.087492 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Dec 12 18:33:21.087502 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Dec 12 18:33:21.087517 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Dec 12 18:33:21.087536 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Dec 12 18:33:21.087578 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Dec 12 18:33:21.087588 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Dec 12 18:33:21.087602 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:33:21.087636 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 12 18:33:21.087652 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 12 18:33:21.087663 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:33:21.087673 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Dec 12 18:33:21.087691 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Dec 12 18:33:21.087710 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 12 18:33:21.087723 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 12 18:33:21.087738 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Dec 12 18:33:21.087749 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:33:21.087759 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:33:21.087769 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:33:21.087780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:33:21.087795 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:33:21.087814 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:33:21.087832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:33:21.087843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:33:21.087853 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:33:21.087864 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:33:21.087874 kernel: TSC deadline timer available
Dec 12 18:33:21.087884 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:33:21.087894 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:33:21.087908 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:33:21.087919 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:33:21.087930 kernel: CPU topo: Num. cores per package: 4
Dec 12 18:33:21.087940 kernel: CPU topo: Num. threads per package: 4
Dec 12 18:33:21.087950 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 12 18:33:21.087961 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:33:21.087972 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:33:21.087982 kernel: kvm-guest: setup PV sched yield
Dec 12 18:33:21.087993 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 12 18:33:21.088007 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:33:21.088018 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:33:21.088029 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 12 18:33:21.088039 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 12 18:33:21.088049 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 12 18:33:21.088066 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 12 18:33:21.088085 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:33:21.088099 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:33:21.088112 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:33:21.088128 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:33:21.088148 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:33:21.088167 kernel: Fallback order for Node 0: 0
Dec 12 18:33:21.088194 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Dec 12 18:33:21.088205 kernel: Policy zone: DMA32
Dec 12 18:33:21.088215 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:33:21.088225 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 18:33:21.088240 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:33:21.088260 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:33:21.088280 kernel: Dynamic Preempt: voluntary
Dec 12 18:33:21.088290 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:33:21.088301 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:33:21.088311 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 18:33:21.088330 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:33:21.088348 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:33:21.088362 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:33:21.088373 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:33:21.088384 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 18:33:21.088400 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:33:21.088419 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:33:21.088438 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:33:21.088449 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 12 18:33:21.088459 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:33:21.088470 kernel: Console: colour dummy device 80x25
Dec 12 18:33:21.088490 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:33:21.088510 kernel: ACPI: Core revision 20240827
Dec 12 18:33:21.088526 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:33:21.088541 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:33:21.088574 kernel: x2apic enabled
Dec 12 18:33:21.088585 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:33:21.088595 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:33:21.088605 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:33:21.088615 kernel: kvm-guest: setup PV IPIs
Dec 12 18:33:21.088624 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:33:21.088634 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 12 18:33:21.088644 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Dec 12 18:33:21.088659 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:33:21.088670 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:33:21.088680 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:33:21.088692 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:33:21.088702 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:33:21.088717 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:33:21.088737 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 12 18:33:21.088756 kernel: active return thunk: retbleed_return_thunk
Dec 12 18:33:21.088775 kernel: RETBleed: Mitigation: untrained return thunk
Dec 12 18:33:21.088786 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:33:21.088796 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:33:21.088806 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:33:21.088825 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:33:21.088844 kernel: active return thunk: srso_return_thunk
Dec 12 18:33:21.088857 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:33:21.088868 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:33:21.088879 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:33:21.088893 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:33:21.088903 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:33:21.088914 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 12 18:33:21.088924 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:33:21.088935 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:33:21.088945 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:33:21.088956 kernel: landlock: Up and running.
Dec 12 18:33:21.088967 kernel: SELinux: Initializing.
Dec 12 18:33:21.088977 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:33:21.088992 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:33:21.089002 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 12 18:33:21.089013 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:33:21.089030 kernel: ... version: 0
Dec 12 18:33:21.089049 kernel: ... bit width: 48
Dec 12 18:33:21.089064 kernel: ... generic registers: 6
Dec 12 18:33:21.089076 kernel: ... value mask: 0000ffffffffffff
Dec 12 18:33:21.089086 kernel: ... max period: 00007fffffffffff
Dec 12 18:33:21.089096 kernel: ... fixed-purpose events: 0
Dec 12 18:33:21.089114 kernel: ... event mask: 000000000000003f
Dec 12 18:33:21.089133 kernel: signal: max sigframe size: 1776
Dec 12 18:33:21.089150 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:33:21.089162 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:33:21.089172 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:33:21.089194 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:33:21.089216 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:33:21.089235 kernel: .... node #0, CPUs: #1 #2 #3
Dec 12 18:33:21.089255 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 18:33:21.089270 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Dec 12 18:33:21.089293 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145388K reserved, 0K cma-reserved)
Dec 12 18:33:21.089318 kernel: devtmpfs: initialized
Dec 12 18:33:21.089332 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:33:21.089347 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 12 18:33:21.089358 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 12 18:33:21.089373 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Dec 12 18:33:21.089400 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 12 18:33:21.089418 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Dec 12 18:33:21.089433 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 12 18:33:21.089443 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:33:21.089461 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 18:33:21.089479 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:33:21.089492 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:33:21.089503 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:33:21.089513 kernel: audit: type=2000 audit(1765564396.726:1): state=initialized audit_enabled=0 res=1
Dec 12 18:33:21.089523 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:33:21.089542 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:33:21.089583 kernel: cpuidle: using governor menu
Dec 12 18:33:21.089595 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:33:21.089605 kernel: dca service started, version 1.12.1
Dec 12 18:33:21.089616 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Dec 12 18:33:21.089626 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:33:21.089637 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:33:21.089647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:33:21.089657 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:33:21.089671 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:33:21.089682 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:33:21.089693 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:33:21.089703 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:33:21.089714 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:33:21.089728 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:33:21.089748 kernel: ACPI: Interpreter enabled
Dec 12 18:33:21.089764 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:33:21.089775 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:33:21.089786 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:33:21.089801 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:33:21.089817 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:33:21.089836 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:33:21.090246 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:33:21.090466 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:33:21.090712 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:33:21.090732 kernel: PCI host bridge to bus 0000:00
Dec 12 18:33:21.090959 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:33:21.091165 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:33:21.091428 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:33:21.091654 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 12 18:33:21.091830 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 12 18:33:21.091991 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 12 18:33:21.092201 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:33:21.092771 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:33:21.093018 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:33:21.093231 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Dec 12 18:33:21.093414 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Dec 12 18:33:21.093667 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 12 18:33:21.093891 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:33:21.094160 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 18:33:21.094594 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Dec 12 18:33:21.094795 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Dec 12 18:33:21.094965 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 12 18:33:21.095147 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:33:21.095322 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Dec 12 18:33:21.095472 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Dec 12 18:33:21.095653 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 12 18:33:21.096009 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:33:21.096637 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Dec 12 18:33:21.096872 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Dec 12 18:33:21.097111 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 12 18:33:21.097617 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Dec 12 18:33:21.097840 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:33:21.097987 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:33:21.098160 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:33:21.098333 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Dec 12 18:33:21.098491 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Dec 12 18:33:21.098682 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:33:21.098805 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Dec 12 18:33:21.098821 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:33:21.098830 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:33:21.098838 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:33:21.098846 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:33:21.098855 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:33:21.098862 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:33:21.098870 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:33:21.098878 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:33:21.098888 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:33:21.098901 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:33:21.098911 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:33:21.098921 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:33:21.098931 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:33:21.098941 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:33:21.098951 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:33:21.098962 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:33:21.098980 kernel: iommu: Default domain type: Translated
Dec 12 18:33:21.098990 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:33:21.099004 kernel: efivars: Registered efivars operations
Dec 12 18:33:21.099014 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:33:21.099024 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:33:21.099035 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 12 18:33:21.099045 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Dec 12 18:33:21.099055 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Dec 12 18:33:21.099066 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Dec 12 18:33:21.099076 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Dec 12 18:33:21.099086 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Dec 12 18:33:21.099100 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Dec 12 18:33:21.099116 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Dec 12 18:33:21.099281 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:33:21.099439 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:33:21.099615 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:33:21.099629 kernel: vgaarb: loaded
Dec 12 18:33:21.099639 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:33:21.099650 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:33:21.099670 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:33:21.099681 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:33:21.099692 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:33:21.099704 kernel: pnp: PnP ACPI init
Dec 12 18:33:21.099907 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 12 18:33:21.099930 kernel: pnp: PnP ACPI: found 6 devices
Dec 12 18:33:21.099941 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:33:21.099952 kernel: NET: Registered PF_INET protocol family
Dec 12 18:33:21.099967 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:33:21.099978 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:33:21.099990 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:33:21.100001 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:33:21.100012 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:33:21.100023 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:33:21.100041 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:33:21.100053 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:33:21.100064 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:33:21.100079 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:33:21.100251 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Dec 12 18:33:21.100414 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Dec 12 18:33:21.100602 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:33:21.100753 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:33:21.100896 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:33:21.101048 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 12 18:33:21.101211 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 12 18:33:21.101372 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 12 18:33:21.101389 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:33:21.101401 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 12 18:33:21.101417 kernel: Initialise system trusted keyrings
Dec 12 18:33:21.101430 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:33:21.101444 kernel: Key type asymmetric registered
Dec 12 18:33:21.101455 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:33:21.101467 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:33:21.101478 kernel: io scheduler mq-deadline registered
Dec 12 18:33:21.101489 kernel: io scheduler kyber registered
Dec 12 18:33:21.101500 kernel: io scheduler bfq registered
Dec 12 18:33:21.101512 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:33:21.101523 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:33:21.101534 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:33:21.101566 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 12 18:33:21.101578 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:33:21.101589 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:33:21.101600 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:33:21.101611 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:33:21.101622 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:33:21.101793 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 12 18:33:21.101813 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:33:21.101961 kernel: rtc_cmos 00:04: registered as rtc0
Dec 12 18:33:21.102121 kernel: rtc_cmos 00:04: setting system clock to 2025-12-12T18:33:20 UTC (1765564400)
Dec 12 18:33:21.102301 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 12 18:33:21.102319 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:33:21.102331 kernel: efifb: probing for efifb
Dec 12 18:33:21.102342 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 12 18:33:21.102354 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 12 18:33:21.102365 kernel: efifb: scrolling: redraw
Dec 12 18:33:21.102376 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 12 18:33:21.102392 kernel: Console: switching to colour frame buffer device 160x50
Dec 12 18:33:21.102405 kernel: fb0: EFI VGA frame buffer device
Dec 12 18:33:21.102417 kernel: pstore: Using crash dump compression: deflate
Dec 12 18:33:21.102430 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 12 18:33:21.102442 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:33:21.102456 kernel: Segment Routing with IPv6
Dec 12 18:33:21.102467 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:33:21.102478 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:33:21.102489 kernel: Key type dns_resolver registered
Dec 12 18:33:21.102506 kernel: IPI shorthand broadcast: enabled
Dec 12 18:33:21.102517 kernel: sched_clock: Marking stable (4453002563, 338484970)->(4932617466, -141129933)
Dec 12 18:33:21.102528 kernel: registered taskstats version 1
Dec 12 18:33:21.102539 kernel: Loading compiled-in X.509 certificates
Dec 12 18:33:21.102580 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:33:21.102592 kernel: Demotion targets for Node 0: null
Dec 12 18:33:21.102603 kernel: Key type .fscrypt registered
Dec 12 18:33:21.102614 kernel: Key type fscrypt-provisioning registered
Dec 12 18:33:21.102625 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:33:21.102641 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:33:21.102652 kernel: ima: No architecture policies found
Dec 12 18:33:21.102663 kernel: clk: Disabling unused clocks
Dec 12 18:33:21.102674 kernel: Warning: unable to open an initial console.
Dec 12 18:33:21.102686 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:33:21.102697 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:33:21.102709 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:33:21.102720 kernel: Run /init as init process
Dec 12 18:33:21.102731 kernel: with arguments:
Dec 12 18:33:21.102745 kernel: /init
Dec 12 18:33:21.102756 kernel: with environment:
Dec 12 18:33:21.102768 kernel: HOME=/
Dec 12 18:33:21.102779 kernel: TERM=linux
Dec 12 18:33:21.102796 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:33:21.102813 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:33:21.102826 systemd[1]: Detected virtualization kvm.
Dec 12 18:33:21.102837 systemd[1]: Detected architecture x86-64.
Dec 12 18:33:21.102852 systemd[1]: Running in initrd.
Dec 12 18:33:21.102863 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:33:21.102875 systemd[1]: Hostname set to <localhost>.
Dec 12 18:33:21.102887 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:33:21.102899 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:33:21.102911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:33:21.102923 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:33:21.102935 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:33:21.102951 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:33:21.102963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:33:21.102976 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:33:21.102989 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:33:21.103002 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:33:21.103013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:33:21.103025 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:33:21.103041 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:33:21.103053 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:33:21.103064 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:33:21.103076 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:33:21.103088 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:33:21.103100 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:33:21.103111 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:33:21.103124 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:33:21.103139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:33:21.103151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:33:21.103163 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:33:21.103175 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:33:21.103202 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:33:21.103214 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:33:21.103226 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:33:21.103238 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:33:21.103249 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:33:21.103265 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:33:21.103279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:33:21.103291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:33:21.103303 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:33:21.103315 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:33:21.103371 systemd-journald[203]: Collecting audit messages is disabled.
Dec 12 18:33:21.103401 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:33:21.103413 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:33:21.103429 systemd-journald[203]: Journal started
Dec 12 18:33:21.103458 systemd-journald[203]: Runtime Journal (/run/log/journal/44b5d5b5d7cb46f8bdca5758b8c2f82d) is 6M, max 48.1M, 42.1M free.
Dec 12 18:33:21.092824 systemd-modules-load[204]: Inserted module 'overlay'
Dec 12 18:33:21.109624 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:33:21.120831 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:33:21.128252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:33:21.134822 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:33:21.144495 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:33:21.148774 kernel: Bridge firewalling registered
Dec 12 18:33:21.148902 systemd-modules-load[204]: Inserted module 'br_netfilter'
Dec 12 18:33:21.160899 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:33:21.163695 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:33:21.167638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:33:21.171603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:33:21.191119 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:33:21.195481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:33:21.197783 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:33:21.199720 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:33:21.202834 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:33:21.230200 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:33:21.236763 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:33:21.286198 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:33:21.289824 systemd-resolved[239]: Positive Trust Anchors:
Dec 12 18:33:21.289843 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:33:21.289873 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:33:21.293127 systemd-resolved[239]: Defaulting to hostname 'linux'.
Dec 12 18:33:21.294846 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:33:21.298707 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:33:21.443601 kernel: SCSI subsystem initialized
Dec 12 18:33:21.454615 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:33:21.469605 kernel: iscsi: registered transport (tcp)
Dec 12 18:33:21.496873 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:33:21.496964 kernel: QLogic iSCSI HBA Driver
Dec 12 18:33:21.524137 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:33:21.548197 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:33:21.553907 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:33:21.625955 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:33:21.629585 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:33:21.691759 kernel: raid6: avx2x4 gen() 23055 MB/s
Dec 12 18:33:21.708618 kernel: raid6: avx2x2 gen() 25754 MB/s
Dec 12 18:33:21.727230 kernel: raid6: avx2x1 gen() 21166 MB/s
Dec 12 18:33:21.727318 kernel: raid6: using algorithm avx2x2 gen() 25754 MB/s
Dec 12 18:33:21.744688 kernel: raid6: .... xor() 17910 MB/s, rmw enabled
Dec 12 18:33:21.744771 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:33:21.769594 kernel: xor: automatically using best checksumming function avx
Dec 12 18:33:22.020591 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:33:22.032955 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:33:22.036813 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:33:22.074694 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Dec 12 18:33:22.083593 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:33:22.089111 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:33:22.120912 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation
Dec 12 18:33:22.160132 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:33:22.164116 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:33:22.254256 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:33:22.262676 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:33:22.351572 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:33:22.351649 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 12 18:33:22.357567 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 12 18:33:22.374053 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:33:22.374128 kernel: GPT:9289727 != 19775487
Dec 12 18:33:22.374145 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:33:22.374176 kernel: GPT:9289727 != 19775487
Dec 12 18:33:22.374190 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:33:22.374204 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:33:22.374220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:33:22.401069 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:33:22.404681 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:33:22.404709 kernel: libata version 3.00 loaded.
Dec 12 18:33:22.401271 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:33:22.410252 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:33:22.417784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:33:22.423596 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:33:22.423875 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:33:22.429318 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:33:22.429526 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:33:22.429724 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:33:22.436926 kernel: scsi host0: ahci
Dec 12 18:33:22.424434 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:33:22.735213 kernel: scsi host1: ahci
Dec 12 18:33:22.738881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 18:33:22.742572 kernel: scsi host2: ahci
Dec 12 18:33:22.744579 kernel: scsi host3: ahci
Dec 12 18:33:22.747568 kernel: scsi host4: ahci
Dec 12 18:33:22.753607 kernel: scsi host5: ahci
Dec 12 18:33:22.753894 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Dec 12 18:33:22.753912 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Dec 12 18:33:22.753926 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Dec 12 18:33:22.753940 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Dec 12 18:33:22.757183 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Dec 12 18:33:22.757227 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Dec 12 18:33:22.771465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:33:22.783253 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 18:33:22.792851 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 18:33:22.795093 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 18:33:22.807442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 18:33:22.808650 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:33:22.842949 disk-uuid[614]: Primary Header is updated.
Dec 12 18:33:22.842949 disk-uuid[614]: Secondary Entries is updated.
Dec 12 18:33:22.842949 disk-uuid[614]: Secondary Header is updated.
Dec 12 18:33:22.849584 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:33:22.854566 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:33:23.065220 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:33:23.065303 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:33:23.066516 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:33:23.068591 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:33:23.068618 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:33:23.069590 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 12 18:33:23.072448 kernel: ata3.00: LPM support broken, forcing max_power
Dec 12 18:33:23.072475 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 12 18:33:23.073503 kernel: ata3.00: applying bridge limits
Dec 12 18:33:23.075578 kernel: ata3.00: LPM support broken, forcing max_power
Dec 12 18:33:23.075679 kernel: ata3.00: configured for UDMA/100
Dec 12 18:33:23.078592 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 12 18:33:23.145432 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 12 18:33:23.145833 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 12 18:33:23.177611 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 12 18:33:23.554256 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:33:23.558980 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:33:23.563282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:33:23.567627 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:33:23.572289 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:33:23.596263 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:33:23.855591 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:33:23.856802 disk-uuid[615]: The operation has completed successfully.
Dec 12 18:33:23.897892 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:33:23.898027 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:33:23.934935 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:33:23.965978 sh[643]: Success
Dec 12 18:33:23.986950 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:33:23.987032 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:33:23.988779 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:33:23.998573 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 12 18:33:24.029939 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:33:24.033903 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:33:24.049806 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:33:24.056760 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (655)
Dec 12 18:33:24.056797 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:33:24.059956 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:33:24.066154 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:33:24.066202 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:33:24.067686 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:33:24.070044 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:33:24.072856 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:33:24.074079 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:33:24.077303 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:33:24.109606 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (689)
Dec 12 18:33:24.113553 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:33:24.113677 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:33:24.117613 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:33:24.117644 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:33:24.124590 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:33:24.125202 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:33:24.126946 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:33:24.344123 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:33:24.351076 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:33:24.447075 systemd-networkd[824]: lo: Link UP Dec 12 18:33:24.447450 systemd-networkd[824]: lo: Gained carrier Dec 12 18:33:24.449483 systemd-networkd[824]: Enumeration completed Dec 12 18:33:24.449699 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:33:24.450009 systemd-networkd[824]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:33:24.450014 systemd-networkd[824]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:33:24.452928 systemd-networkd[824]: eth0: Link UP Dec 12 18:33:24.453603 systemd[1]: Reached target network.target - Network. Dec 12 18:33:24.453664 systemd-networkd[824]: eth0: Gained carrier Dec 12 18:33:24.453679 systemd-networkd[824]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:33:24.524129 ignition[733]: Ignition 2.22.0 Dec 12 18:33:24.524141 ignition[733]: Stage: fetch-offline Dec 12 18:33:24.524624 systemd-networkd[824]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 18:33:24.524186 ignition[733]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:24.524195 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 18:33:24.524321 ignition[733]: parsed url from cmdline: "" Dec 12 18:33:24.524324 ignition[733]: no config URL provided Dec 12 18:33:24.524331 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:33:24.524340 ignition[733]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:33:24.524365 ignition[733]: op(1): [started] loading QEMU firmware config module Dec 12 18:33:24.524370 ignition[733]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 12 18:33:24.538383 ignition[733]: op(1): [finished] loading QEMU firmware config module Dec 12 18:33:24.622107 ignition[733]: parsing config with SHA512: 79a4c94864443c2d8b3ed27f995390ac8f713e76a4606bf2476efe2eba0c711db84e4362348410d9ed7a504ffeaa3b9106269197fc2e1e76caa3340aa383d16f Dec 12 18:33:24.635173 unknown[733]: fetched base config from "system" Dec 12 18:33:24.635192 unknown[733]: fetched user config from "qemu" Dec 12 18:33:24.638385 ignition[733]: fetch-offline: fetch-offline passed Dec 12 18:33:24.638449 ignition[733]: Ignition finished successfully Dec 12 18:33:24.644151 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:33:24.646730 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 12 18:33:24.647759 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 18:33:24.769334 ignition[838]: Ignition 2.22.0 Dec 12 18:33:24.769350 ignition[838]: Stage: kargs Dec 12 18:33:24.769532 ignition[838]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:24.769566 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 18:33:24.770592 ignition[838]: kargs: kargs passed Dec 12 18:33:24.778038 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:33:24.770642 ignition[838]: Ignition finished successfully Dec 12 18:33:24.783033 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
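[Editor's note] The "parsing config with SHA512: 79a4c9…" line above is Ignition fingerprinting the configuration it pulled in via QEMU's fw_cfg module; the fingerprint corresponds to a plain SHA-512 over the raw config bytes. A toy reproduction (the config content here is a stand-in, not the one from this boot):

    # Sketch: log-style SHA-512 fingerprint of an Ignition config blob.
    import hashlib, json

    cfg = json.dumps({"ignition": {"version": "3.4.0"}}).encode()  # stand-in
    print("parsing config with SHA512:", hashlib.sha512(cfg).hexdigest())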
Dec 12 18:33:24.895877 ignition[846]: Ignition 2.22.0 Dec 12 18:33:24.895890 ignition[846]: Stage: disks Dec 12 18:33:24.896086 ignition[846]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:24.896132 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 18:33:24.899385 ignition[846]: disks: disks passed Dec 12 18:33:24.899440 ignition[846]: Ignition finished successfully Dec 12 18:33:24.904833 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:33:24.906746 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:33:24.911890 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:33:24.915797 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:33:24.919591 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:33:24.923196 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:33:24.929517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 18:33:24.973557 systemd-fsck[855]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 18:33:24.983842 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:33:24.991992 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:33:25.194592 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 12 18:33:25.195302 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:33:25.196024 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:33:25.199835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:33:25.205567 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:33:25.209295 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 18:33:25.209361 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:33:25.209395 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:33:25.233120 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:33:25.237578 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Dec 12 18:33:25.239520 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 18:33:25.245866 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:33:25.245895 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:33:25.245910 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:33:25.248010 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:33:25.249792 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:33:25.290145 initrd-setup-root[887]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:33:25.295617 initrd-setup-root[894]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:33:25.302961 initrd-setup-root[901]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:33:25.310583 initrd-setup-root[908]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:33:25.416062 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
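[Editor's note] Both systemd-fsck and the sysroot mount above locate the device through LABEL=ROOT, which lives in the ext4 superblock 1024 bytes into the partition (magic 0xEF53 at superblock offset 56, 16-byte volume name at offset 120). A small sketch that reads it directly, assuming a hypothetical image file in place of /dev/vda9:

    # Sketch: read an ext4 label and the totals fsck reports as denominators
    # ("15/553520 files, 52789/553472 blocks").
    import struct

    DEV = "root.img"  # hypothetical; the live system would use /dev/vda9

    with open(DEV, "rb") as f:
        f.seek(1024)                     # superblock starts at byte 1024
        sb = f.read(1024)

    inodes, blocks = struct.unpack_from("<II", sb, 0)   # totals, not usage
    (magic,) = struct.unpack_from("<H", sb, 56)
    label = sb[120:136].split(b"\0")[0].decode()
    assert magic == 0xEF53
    print(f"label={label!r} inodes={inodes} blocks={blocks}")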
Dec 12 18:33:25.417780 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:33:25.423288 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 18:33:25.444401 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:33:25.446811 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:33:25.461757 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 18:33:25.527437 ignition[976]: INFO : Ignition 2.22.0 Dec 12 18:33:25.527437 ignition[976]: INFO : Stage: mount Dec 12 18:33:25.530814 ignition[976]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:25.530814 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 18:33:25.530814 ignition[976]: INFO : mount: mount passed Dec 12 18:33:25.530814 ignition[976]: INFO : Ignition finished successfully Dec 12 18:33:25.540721 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:33:25.542302 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:33:26.197466 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:33:26.233596 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (989) Dec 12 18:33:26.233694 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:33:26.237299 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:33:26.245828 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:33:26.245908 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:33:26.251109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:33:26.296506 ignition[1006]: INFO : Ignition 2.22.0 Dec 12 18:33:26.296506 ignition[1006]: INFO : Stage: files Dec 12 18:33:26.300872 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:26.300872 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 18:33:26.300872 ignition[1006]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:33:26.309072 ignition[1006]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:33:26.309072 ignition[1006]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:33:26.317227 ignition[1006]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:33:26.320459 ignition[1006]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:33:26.323231 ignition[1006]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:33:26.321335 unknown[1006]: wrote ssh authorized keys file for user: core Dec 12 18:33:26.328155 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:33:26.328155 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 12 18:33:26.368854 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:33:26.494458 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:33:26.494458 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:33:26.501634 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:33:26.501634 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:33:26.501634 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:33:26.501634 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:33:26.501634 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:33:26.501634 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:33:26.501634 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:33:26.526290 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:33:26.526290 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:33:26.526290 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:33:26.526290 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:33:26.526290 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:33:26.526290 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 12 18:33:26.529818 systemd-networkd[824]: eth0: Gained IPv6LL Dec 12 18:33:26.953642 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 18:33:27.731210 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:33:27.731210 ignition[1006]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 18:33:27.737779 ignition[1006]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:33:27.810757 ignition[1006]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:33:27.810757 ignition[1006]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 18:33:27.810757 ignition[1006]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 18:33:27.819517 ignition[1006]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 18:33:27.819517 ignition[1006]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 18:33:27.819517 ignition[1006]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 18:33:27.819517 ignition[1006]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 18:33:27.851887 ignition[1006]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 18:33:27.862480 ignition[1006]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 18:33:27.865689 ignition[1006]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 18:33:27.865689 ignition[1006]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:33:27.865689 ignition[1006]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:33:27.865689 ignition[1006]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:33:27.865689 ignition[1006]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:33:27.865689 ignition[1006]: INFO : files: files passed Dec 12 18:33:27.865689 ignition[1006]: INFO : Ignition finished successfully Dec 12 18:33:27.882571 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:33:27.888266 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:33:27.892072 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:33:27.928228 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:33:27.928444 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:33:27.934508 initrd-setup-root-after-ignition[1034]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 18:33:27.939412 initrd-setup-root-after-ignition[1037]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:33:27.939412 initrd-setup-root-after-ignition[1037]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:33:27.937143 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:33:27.949091 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:33:27.938225 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:33:27.942832 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:33:28.023345 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:33:28.023528 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:33:28.027707 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:33:28.031895 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:33:28.033598 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:33:28.035257 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:33:28.085413 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:33:28.089834 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:33:28.123808 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:33:28.125985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:33:28.129904 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:33:28.131924 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:33:28.132149 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:33:28.140315 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:33:28.142121 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:33:28.145262 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:33:28.146786 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:33:28.150165 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:33:28.151076 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:33:28.151659 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:33:28.162594 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:33:28.166430 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:33:28.170381 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:33:28.173641 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:33:28.175352 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:33:28.175489 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:33:28.182932 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:33:28.184888 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:33:28.188572 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:33:28.190576 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:33:28.192270 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 18:33:28.192420 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:33:28.201422 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:33:28.201630 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:33:28.203439 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:33:28.208464 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:33:28.213674 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:33:28.227185 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:33:28.229078 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:33:28.232450 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:33:28.232584 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:33:28.237693 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:33:28.237852 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:33:28.241331 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 12 18:33:28.241510 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:33:28.245842 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:33:28.246112 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:33:28.254299 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:33:28.256769 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:33:28.263798 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:33:28.264586 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:33:28.265811 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:33:28.265968 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:33:28.276873 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:33:28.281385 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:33:28.326108 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:33:28.332358 ignition[1061]: INFO : Ignition 2.22.0 Dec 12 18:33:28.332358 ignition[1061]: INFO : Stage: umount Dec 12 18:33:28.332358 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:33:28.332358 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 18:33:28.342969 ignition[1061]: INFO : umount: umount passed Dec 12 18:33:28.342969 ignition[1061]: INFO : Ignition finished successfully Dec 12 18:33:28.335290 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:33:28.335516 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:33:28.337773 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:33:28.337956 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:33:28.341627 systemd[1]: Stopped target network.target - Network. Dec 12 18:33:28.343101 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:33:28.343222 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:33:28.344082 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:33:28.344168 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:33:28.348689 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:33:28.348788 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:33:28.353272 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:33:28.353351 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:33:28.354796 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:33:28.354874 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 18:33:28.355611 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:33:28.361253 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:33:28.374921 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:33:28.375131 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:33:28.380824 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 18:33:28.381197 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:33:28.381359 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 12 18:33:28.386807 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 18:33:28.387797 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:33:28.392719 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:33:28.392786 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:33:28.400268 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:33:28.402265 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:33:28.402361 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:33:28.403264 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:33:28.403327 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:33:28.414123 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:33:28.414213 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:33:28.416087 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:33:28.416157 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:33:28.424662 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:33:28.431136 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 18:33:28.431212 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:33:28.445376 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:33:28.445589 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:33:28.449495 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:33:28.449575 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:33:28.452012 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:33:28.452053 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:33:28.456878 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:33:28.456934 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:33:28.464183 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:33:28.464253 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:33:28.473255 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:33:28.473386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:33:28.480707 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:33:28.482612 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:33:28.482691 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:33:28.488186 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:33:28.488283 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:33:28.496479 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 18:33:28.496532 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Dec 12 18:33:28.498687 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:33:28.498749 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:33:28.502390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:33:28.502449 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:28.510028 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 18:33:28.510096 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 12 18:33:28.510145 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 18:33:28.510195 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:33:28.510676 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:33:28.510807 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:33:28.511231 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:33:28.511351 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:33:28.517078 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:33:28.525644 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:33:28.554152 systemd[1]: Switching root. Dec 12 18:33:28.591144 systemd-journald[203]: Journal stopped Dec 12 18:33:30.948595 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Dec 12 18:33:30.948693 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:33:30.948714 kernel: SELinux: policy capability open_perms=1 Dec 12 18:33:30.948734 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:33:30.948749 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:33:30.948765 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:33:30.948781 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:33:30.948797 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:33:30.948815 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:33:30.948835 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:33:30.948850 kernel: audit: type=1403 audit(1765564409.412:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 18:33:30.948873 systemd[1]: Successfully loaded SELinux policy in 77.337ms. Dec 12 18:33:30.948896 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.293ms. Dec 12 18:33:30.948914 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:33:30.948929 systemd[1]: Detected virtualization kvm. Dec 12 18:33:30.948961 systemd[1]: Detected architecture x86-64. Dec 12 18:33:30.948983 systemd[1]: Detected first boot. Dec 12 18:33:30.948999 systemd[1]: Initializing machine ID from VM UUID. Dec 12 18:33:30.949023 zram_generator::config[1107]: No configuration found. 
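[Editor's note] "Initializing machine ID from VM UUID" is the first-boot path where systemd seeds /etc/machine-id from the hypervisor-provided SMBIOS UUID instead of generating a random one. Roughly, and treating this as the KVM happy path only (systemd's real derivation handles more cases):

    # Approximate sketch: reuse the SMBIOS product UUID, dashes stripped,
    # in the 32-hex-character /etc/machine-id format. Linux-only.
    from pathlib import Path

    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = uuid.replace("-", "").lower()
    print(machine_id)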
Dec 12 18:33:30.949038 kernel: Guest personality initialized and is inactive Dec 12 18:33:30.949056 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:33:30.949072 kernel: Initialized host personality Dec 12 18:33:30.949111 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:33:30.949140 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:33:30.949164 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 18:33:30.949182 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:33:30.949205 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:33:30.949219 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:33:30.949232 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:33:30.949247 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:33:30.949259 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:33:30.949276 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:33:30.949292 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:33:30.949305 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:33:30.949317 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:33:30.949329 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:33:30.949340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:33:30.949355 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:33:30.949367 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:33:30.949380 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:33:30.949392 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:33:30.949404 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:33:30.949416 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:33:30.949429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:33:30.949443 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:33:30.949461 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:33:30.949473 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:33:30.949485 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:33:30.949498 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:33:30.949510 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:33:30.949522 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:33:30.949534 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:33:30.949560 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:33:30.949573 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Dec 12 18:33:30.949588 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:33:30.949600 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:33:30.949612 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:33:30.949625 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:33:30.949637 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:33:30.949649 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:33:30.949661 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:33:30.949673 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:33:30.949685 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:33:30.949699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:33:30.949711 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:33:30.949723 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:33:30.949735 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:33:30.949747 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:33:30.949759 systemd[1]: Reached target machines.target - Containers. Dec 12 18:33:30.949772 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:33:30.949784 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:33:30.949798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:33:30.949810 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:33:30.949822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:33:30.949834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:33:30.949846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:33:30.949859 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:33:30.949870 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:33:30.949882 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:33:30.949895 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:33:30.949909 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:33:30.949921 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:33:30.949933 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:33:30.949952 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:33:30.949965 kernel: fuse: init (API version 7.41) Dec 12 18:33:30.949979 systemd[1]: Starting systemd-journald.service - Journal Service... 
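[Editor's note] The modprobe@configfs, modprobe@dm_mod, modprobe@drm, … jobs above are all instances of a single template unit; the instance name rides in the systemd %i specifier and simply becomes the module argument. The shape of it, approximately:

    # Sketch of a modprobe@.service instance: the instance name ("dm_mod" in
    # modprobe@dm_mod.service) is substituted for %i and handed to modprobe,
    # roughly equivalent to: ExecStart=-/sbin/modprobe -abq %i
    import subprocess

    def load_module(instance: str) -> None:
        subprocess.run(["modprobe", "-abq", instance], check=False)

    load_module("dm_mod")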
Dec 12 18:33:30.949990 kernel: loop: module loaded Dec 12 18:33:30.950001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:33:30.950013 kernel: ACPI: bus type drm_connector registered Dec 12 18:33:30.950026 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:33:30.950041 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:33:30.950053 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:33:30.950064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:33:30.950078 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 18:33:30.950090 systemd[1]: Stopped verity-setup.service. Dec 12 18:33:30.950127 systemd-journald[1182]: Collecting audit messages is disabled. Dec 12 18:33:30.950152 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:33:30.950167 systemd-journald[1182]: Journal started Dec 12 18:33:30.950190 systemd-journald[1182]: Runtime Journal (/run/log/journal/44b5d5b5d7cb46f8bdca5758b8c2f82d) is 6M, max 48.1M, 42.1M free. Dec 12 18:33:30.576166 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:33:30.604225 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 18:33:30.604842 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:33:30.956976 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:33:30.959276 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:33:30.961392 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:33:30.964490 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:33:30.966676 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:33:30.968633 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:33:30.970904 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:33:30.973261 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:33:30.975938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:33:30.978743 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:33:30.978979 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:33:30.982237 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:33:30.982964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:33:30.985667 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:33:30.985888 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:33:30.988156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:33:30.988386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:33:30.990869 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:33:30.991092 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:33:30.993251 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:33:30.993463 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
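[Editor's note] The journald size line above is self-consistent accounting: the runtime journal lives on the tmpfs under /run, its cap defaults to a fraction of that filesystem (10%, limited to 4 GiB), and the reported free figure is simply cap minus current use:

    # Worked check of the "is 6M, max 48.1M, 42.1M free" line, plus the
    # default cap rule (10% of the backing filesystem, capped at 4 GiB).
    import os

    used_mib, cap_mib = 6.0, 48.1
    print(f"{cap_mib - used_mib:.1f}M free")           # 42.1M, matching the log

    st = os.statvfs("/run")                            # on a Linux host
    default_cap = min(st.f_frsize * st.f_blocks // 10, 4 << 30)
    print(f"default runtime cap here: {default_cap / 2**20:.1f}M")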
Dec 12 18:33:30.995785 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:33:30.998062 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:33:31.000786 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:33:31.003390 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:33:31.019470 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:33:31.023061 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:33:31.028172 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:33:31.030648 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:33:31.030775 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:33:31.034640 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:33:31.039597 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:33:31.041415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:33:31.043070 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:33:31.048142 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:33:31.052660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:33:31.054080 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:33:31.056038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:33:31.079343 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:33:31.083994 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:33:31.097039 systemd-journald[1182]: Time spent on flushing to /var/log/journal/44b5d5b5d7cb46f8bdca5758b8c2f82d is 20.431ms for 1069 entries. Dec 12 18:33:31.097039 systemd-journald[1182]: System Journal (/var/log/journal/44b5d5b5d7cb46f8bdca5758b8c2f82d) is 8M, max 195.6M, 187.6M free. Dec 12 18:33:31.147338 systemd-journald[1182]: Received client request to flush runtime journal. Dec 12 18:33:31.147406 kernel: loop0: detected capacity change from 0 to 110984 Dec 12 18:33:31.098656 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:33:31.103294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:33:31.106539 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:33:31.109488 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:33:31.116758 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:33:31.125334 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:33:31.132463 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Dec 12 18:33:31.151027 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:33:31.151807 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Dec 12 18:33:31.151824 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Dec 12 18:33:31.158011 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:33:31.225313 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:33:31.235956 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:33:31.240596 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:33:31.244019 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:33:31.260596 kernel: loop1: detected capacity change from 0 to 128560 Dec 12 18:33:31.275836 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:33:31.284348 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:33:31.311602 kernel: loop2: detected capacity change from 0 to 219144 Dec 12 18:33:31.313584 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Dec 12 18:33:31.313609 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Dec 12 18:33:31.318159 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:33:31.351590 kernel: loop3: detected capacity change from 0 to 110984 Dec 12 18:33:31.419726 kernel: loop4: detected capacity change from 0 to 128560 Dec 12 18:33:31.441956 kernel: loop5: detected capacity change from 0 to 219144 Dec 12 18:33:31.474518 (sd-merge)[1254]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 12 18:33:31.475303 (sd-merge)[1254]: Merged extensions into '/usr'. Dec 12 18:33:31.528359 systemd[1]: Reload requested from client PID 1226 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:33:31.528376 systemd[1]: Reloading... Dec 12 18:33:31.635644 zram_generator::config[1280]: No configuration found. Dec 12 18:33:31.865659 ldconfig[1221]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:33:31.909136 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:33:31.909234 systemd[1]: Reloading finished in 379 ms. Dec 12 18:33:31.957535 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:33:31.961910 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:33:31.981867 systemd[1]: Starting ensure-sysext.service... Dec 12 18:33:31.985033 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:33:32.047684 systemd[1]: Reload requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:33:32.047706 systemd[1]: Reloading... Dec 12 18:33:32.053207 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:33:32.054457 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:33:32.054963 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:33:32.055539 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
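[Editor's note] The loop0…loop5 capacity changes and the (sd-merge) lines above are systemd-sysext attaching the three extension images and folding them into the /usr tree. The merge itself is an overlayfs mount: each image's /usr hierarchy becomes a lower layer stacked above the base. Conceptually (the mount points below are illustrative, not the paths sysext really uses, and this needs root):

    # Conceptual sketch of the sysext merge: a read-only overlay over /usr
    # whose lower layers are the extension images, base /usr lowest.
    import subprocess

    lowers = [
        "/run/extensions/kubernetes/usr",       # hypothetical image mount points
        "/run/extensions/docker-flatcar/usr",
        "/run/extensions/containerd-flatcar/usr",
        "/usr",                                 # base layer, listed last (lowest)
    ]
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", "lowerdir=" + ":".join(lowers), "/usr"],
        check=True,
    )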
Dec 12 18:33:32.056839 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 18:33:32.057240 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Dec 12 18:33:32.057335 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Dec 12 18:33:32.065243 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:33:32.065419 systemd-tmpfiles[1318]: Skipping /boot Dec 12 18:33:32.079467 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:33:32.079486 systemd-tmpfiles[1318]: Skipping /boot Dec 12 18:33:32.160597 zram_generator::config[1345]: No configuration found. Dec 12 18:33:32.392698 systemd[1]: Reloading finished in 344 ms. Dec 12 18:33:32.415269 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:33:32.444277 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:33:32.457656 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:33:32.462376 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:33:32.487704 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:33:32.492510 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:33:32.496758 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:33:32.503692 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:33:32.509400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:33:32.509703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:33:32.516206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:33:32.523473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:33:32.534227 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:33:32.537777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:33:32.537957 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:33:32.538102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:33:32.539758 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:33:32.542901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:33:32.543212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:33:32.546283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:33:32.546513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:33:32.559569 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:33:32.559917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 12 18:33:32.564487 systemd-udevd[1391]: Using default interface naming scheme 'v255'. Dec 12 18:33:32.568212 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:33:32.576652 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:33:32.577099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:33:32.578992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:33:32.583782 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:33:32.588894 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:33:32.600012 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:33:32.602171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:33:32.602229 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:33:32.603835 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:33:32.608795 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:33:32.611637 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:33:32.612204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:33:32.615372 systemd[1]: Finished ensure-sysext.service. Dec 12 18:33:32.657616 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:33:32.660520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:33:32.661802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:33:32.664336 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:33:32.664854 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:33:32.667922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:33:32.668609 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:33:32.669319 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:33:32.680682 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:33:32.681336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:33:32.716717 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:33:32.720736 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:33:32.722636 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:33:32.722721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:33:32.728584 augenrules[1460]: No rules Dec 12 18:33:32.728733 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Dec 12 18:33:32.730596 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:33:32.731013 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:33:32.732570 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:33:32.769082 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:33:32.897136 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:33:32.906602 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 12 18:33:32.917605 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:33:32.931589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:33:32.948333 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:33:32.995582 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 12 18:33:32.996023 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 12 18:33:33.021174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:33:33.037517 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 18:33:33.070128 systemd-networkd[1458]: lo: Link UP Dec 12 18:33:33.070532 systemd-networkd[1458]: lo: Gained carrier Dec 12 18:33:33.072840 systemd-networkd[1458]: Enumeration completed Dec 12 18:33:33.073061 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:33:33.073843 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:33:33.073953 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:33:33.074956 systemd-networkd[1458]: eth0: Link UP Dec 12 18:33:33.075260 systemd-networkd[1458]: eth0: Gained carrier Dec 12 18:33:33.075369 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:33:33.077464 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:33:33.080986 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:33:33.088105 systemd-resolved[1387]: Positive Trust Anchors: Dec 12 18:33:33.088120 systemd-resolved[1387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:33:33.088150 systemd-resolved[1387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:33:33.094338 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 18:33:33.098021 systemd[1]: Reached target time-set.target - System Time Set. 
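[Editor's note] The positive trust anchor resolved prints below is the root zone's published DS record (the 2017 KSK); its fields split into key tag, algorithm, digest type, and digest:

    # Parse the DS record from the log: owner, class, type, then the RDATA.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rrtype, key_tag, algorithm, digest_type, digest = ds.split()
    # algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256
    print(f"key tag {key_tag}, algorithm {algorithm}, digest type {digest_type}")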
Dec 12 18:33:33.102711 systemd-networkd[1458]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 18:33:33.104754 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Dec 12 18:33:33.105207 systemd-resolved[1387]: Defaulting to hostname 'linux'. Dec 12 18:33:33.879984 systemd-timesyncd[1464]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 18:33:33.880249 systemd-timesyncd[1464]: Initial clock synchronization to Fri 2025-12-12 18:33:33.879454 UTC. Dec 12 18:33:33.880723 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:33:33.886007 systemd-resolved[1387]: Clock change detected. Flushing caches. Dec 12 18:33:33.895523 systemd[1]: Reached target network.target - Network. Dec 12 18:33:33.898355 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:33:33.903316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:33:33.906426 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:33:33.941224 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:33:33.941532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:33.976064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:33:34.000659 kernel: kvm_amd: TSC scaling supported Dec 12 18:33:34.000770 kernel: kvm_amd: Nested Virtualization enabled Dec 12 18:33:34.000790 kernel: kvm_amd: Nested Paging enabled Dec 12 18:33:34.000807 kernel: kvm_amd: LBR virtualization supported Dec 12 18:33:34.001511 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 12 18:33:34.003319 kernel: kvm_amd: Virtual GIF supported Dec 12 18:33:34.052963 kernel: EDAC MC: Ver: 3.0.0 Dec 12 18:33:34.091774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:33:34.116497 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:33:34.118577 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:33:34.120872 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:33:34.123264 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:33:34.125657 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:33:34.127933 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:33:34.130448 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:33:34.132898 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:33:34.132965 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:33:34.134671 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:33:34.137717 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:33:34.142033 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:33:34.146789 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:33:34.149401 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
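The jump in timestamps above, from 18:33:33.10 straight to 18:33:33.87, is the initial clock step applied by systemd-timesyncd after contacting 10.0.0.1:123; systemd-resolved reacts by flushing its caches. To inspect the sync state on a running system, something like:

    # show the NTP server, offset, jitter, and poll interval currently in use
    timedatectl timesync-status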
Dec 12 18:33:34.151759 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:33:34.158898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:33:34.161576 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:33:34.164687 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:33:34.167575 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:33:34.169437 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:33:34.171337 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:33:34.171365 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:33:34.172622 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:33:34.176076 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:33:34.178803 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:33:34.182605 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:33:34.186136 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:33:34.210959 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:33:34.215044 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:33:34.217844 jq[1521]: false Dec 12 18:33:34.220540 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:33:34.221503 extend-filesystems[1522]: Found /dev/vda6 Dec 12 18:33:34.225671 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:33:34.226348 extend-filesystems[1522]: Found /dev/vda9 Dec 12 18:33:34.234589 extend-filesystems[1522]: Checking size of /dev/vda9 Dec 12 18:33:34.230317 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:33:34.229548 oslogin_cache_refresh[1523]: Refreshing passwd entry cache Dec 12 18:33:34.236978 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing passwd entry cache Dec 12 18:33:34.235069 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:33:34.241381 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting users, quitting Dec 12 18:33:34.241482 oslogin_cache_refresh[1523]: Failure getting users, quitting Dec 12 18:33:34.241617 oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:33:34.242300 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:33:34.242300 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing group entry cache Dec 12 18:33:34.241681 oslogin_cache_refresh[1523]: Refreshing group entry cache Dec 12 18:33:34.246763 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting groups, quitting Dec 12 18:33:34.246763 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Dec 12 18:33:34.246748 oslogin_cache_refresh[1523]: Failure getting groups, quitting Dec 12 18:33:34.246767 oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:33:34.252299 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:33:34.255936 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:33:34.256940 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:33:34.257825 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:33:34.262794 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:33:34.272513 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:33:34.278599 jq[1544]: true Dec 12 18:33:34.281212 extend-filesystems[1522]: Resized partition /dev/vda9 Dec 12 18:33:34.301571 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:33:34.302013 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:33:34.302613 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:33:34.303021 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:33:34.305761 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:33:34.306145 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:33:34.311477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:33:34.311857 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:33:34.343472 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 18:33:34.348137 jq[1552]: true Dec 12 18:33:34.355952 extend-filesystems[1562]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:33:34.376262 update_engine[1542]: I20251212 18:33:34.364311 1542 main.cc:92] Flatcar Update Engine starting Dec 12 18:33:34.364511 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:33:34.364536 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:33:34.365893 systemd-logind[1540]: New seat seat0. Dec 12 18:33:34.371132 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:33:34.389233 tar[1549]: linux-amd64/LICENSE Dec 12 18:33:34.391547 tar[1549]: linux-amd64/helm Dec 12 18:33:34.482290 sshd_keygen[1545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:33:34.532762 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:33:34.539738 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:33:34.544248 dbus-daemon[1519]: [system] SELinux support is enabled Dec 12 18:33:34.548080 update_engine[1542]: I20251212 18:33:34.547801 1542 update_check_scheduler.cc:74] Next update check in 6m51s Dec 12 18:33:34.551505 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 12 18:33:34.564942 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 18:33:34.567602 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:33:34.567632 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:33:34.569959 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:33:34.569980 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:33:34.573442 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:33:34.573841 dbus-daemon[1519]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 12 18:33:34.579449 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:33:34.585131 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:33:34.585484 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:33:34.591038 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:33:34.687968 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:33:34.696389 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:33:34.730808 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:33:34.733757 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:33:34.746622 locksmithd[1590]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:33:34.832248 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 18:33:34.879258 extend-filesystems[1562]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 18:33:34.879258 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 18:33:34.879258 extend-filesystems[1562]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 12 18:33:34.866724 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:33:34.886281 bash[1580]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:33:34.886413 extend-filesystems[1522]: Resized filesystem in /dev/vda9 Dec 12 18:33:34.870519 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:33:34.870929 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:33:34.875656 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
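extend-filesystems grew the root filesystem in place here; ext4 supports on-line growth, so the filesystem stays mounted throughout. The equivalent manual steps would be roughly the following (device names taken from the log; growpart from cloud-utils is an assumption, Flatcar uses its own partition logic):

    # grow partition 9 of /dev/vda to fill the disk, then grow the mounted ext4
    growpart /dev/vda 9
    resize2fs /dev/vda9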
Dec 12 18:33:35.093721 containerd[1553]: time="2025-12-12T18:33:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:33:35.094977 containerd[1553]: time="2025-12-12T18:33:35.094937561Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:33:35.160135 containerd[1553]: time="2025-12-12T18:33:35.160044196Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.485µs" Dec 12 18:33:35.160135 containerd[1553]: time="2025-12-12T18:33:35.160115940Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:33:35.160135 containerd[1553]: time="2025-12-12T18:33:35.160147630Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:33:35.160469 containerd[1553]: time="2025-12-12T18:33:35.160443394Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:33:35.160494 containerd[1553]: time="2025-12-12T18:33:35.160467740Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:33:35.160513 containerd[1553]: time="2025-12-12T18:33:35.160503076Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:33:35.160626 containerd[1553]: time="2025-12-12T18:33:35.160601871Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:33:35.160626 containerd[1553]: time="2025-12-12T18:33:35.160619504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:33:35.161079 containerd[1553]: time="2025-12-12T18:33:35.161043650Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:33:35.161079 containerd[1553]: time="2025-12-12T18:33:35.161063747Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:33:35.161121 containerd[1553]: time="2025-12-12T18:33:35.161077764Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:33:35.161121 containerd[1553]: time="2025-12-12T18:33:35.161089345Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:33:35.161252 containerd[1553]: time="2025-12-12T18:33:35.161216584Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:33:35.161618 containerd[1553]: time="2025-12-12T18:33:35.161579044Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:33:35.161656 containerd[1553]: time="2025-12-12T18:33:35.161621724Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:33:35.161656 containerd[1553]: time="2025-12-12T18:33:35.161635039Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:33:35.161701 containerd[1553]: time="2025-12-12T18:33:35.161683980Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:33:35.162242 containerd[1553]: time="2025-12-12T18:33:35.162195910Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:33:35.162332 containerd[1553]: time="2025-12-12T18:33:35.162302270Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:33:35.173390 systemd-networkd[1458]: eth0: Gained IPv6LL Dec 12 18:33:35.178888 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:33:35.179626 containerd[1553]: time="2025-12-12T18:33:35.179409255Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:33:35.179626 containerd[1553]: time="2025-12-12T18:33:35.179519662Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:33:35.179626 containerd[1553]: time="2025-12-12T18:33:35.179549709Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:33:35.179626 containerd[1553]: time="2025-12-12T18:33:35.179568864Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:33:35.179626 containerd[1553]: time="2025-12-12T18:33:35.179594072Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:33:35.179784 containerd[1553]: time="2025-12-12T18:33:35.179634197Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:33:35.179784 containerd[1553]: time="2025-12-12T18:33:35.179665686Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:33:35.179784 containerd[1553]: time="2025-12-12T18:33:35.179680985Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:33:35.179784 containerd[1553]: time="2025-12-12T18:33:35.179696654Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:33:35.179784 containerd[1553]: time="2025-12-12T18:33:35.179731369Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:33:35.179784 containerd[1553]: time="2025-12-12T18:33:35.179758750Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:33:35.179784 containerd[1553]: time="2025-12-12T18:33:35.179780772Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:33:35.180721 containerd[1553]: time="2025-12-12T18:33:35.180543191Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:33:35.180721 containerd[1553]: time="2025-12-12T18:33:35.180578518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:33:35.180721 containerd[1553]: time="2025-12-12T18:33:35.180600879Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:33:35.180721 containerd[1553]: time="2025-12-12T18:33:35.180629112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:33:35.180721 containerd[1553]: time="2025-12-12T18:33:35.180652616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:33:35.180721 containerd[1553]: time="2025-12-12T18:33:35.180679226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:33:35.180721 containerd[1553]: time="2025-12-12T18:33:35.180696308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:33:35.180721 containerd[1553]: time="2025-12-12T18:33:35.180719061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:33:35.180956 containerd[1553]: time="2025-12-12T18:33:35.180733348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:33:35.180956 containerd[1553]: time="2025-12-12T18:33:35.180746392Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:33:35.180956 containerd[1553]: time="2025-12-12T18:33:35.180762583Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:33:35.180956 containerd[1553]: time="2025-12-12T18:33:35.180871928Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:33:35.180956 containerd[1553]: time="2025-12-12T18:33:35.180900151Z" level=info msg="Start snapshots syncer" Dec 12 18:33:35.182148 containerd[1553]: time="2025-12-12T18:33:35.182021273Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:33:35.182124 systemd[1]: Reached target network-online.target - Network is Online. 
Dec 12 18:33:35.184558 containerd[1553]: time="2025-12-12T18:33:35.184500471Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:33:35.184825 containerd[1553]: time="2025-12-12T18:33:35.184589368Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:33:35.184825 containerd[1553]: time="2025-12-12T18:33:35.184656414Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:33:35.184825 containerd[1553]: time="2025-12-12T18:33:35.184798280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:33:35.184977 containerd[1553]: time="2025-12-12T18:33:35.184825260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:33:35.184977 containerd[1553]: time="2025-12-12T18:33:35.184841611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:33:35.184977 containerd[1553]: time="2025-12-12T18:33:35.184853974Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:33:35.184977 containerd[1553]: time="2025-12-12T18:33:35.184874222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:33:35.184977 containerd[1553]: time="2025-12-12T18:33:35.184885784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:33:35.184977 containerd[1553]: time="2025-12-12T18:33:35.184900090Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:33:35.185168 containerd[1553]: time="2025-12-12T18:33:35.185032429Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:33:35.185168 containerd[1553]: time="2025-12-12T18:33:35.185142345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:33:35.185168 containerd[1553]: time="2025-12-12T18:33:35.185162052Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:33:35.185307 containerd[1553]: time="2025-12-12T18:33:35.185283189Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:33:35.185340 containerd[1553]: time="2025-12-12T18:33:35.185311031Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:33:35.185340 containerd[1553]: time="2025-12-12T18:33:35.185320399Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:33:35.185340 containerd[1553]: time="2025-12-12T18:33:35.185332491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:33:35.185469 containerd[1553]: time="2025-12-12T18:33:35.185342700Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:33:35.185469 containerd[1553]: time="2025-12-12T18:33:35.185363630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:33:35.185469 containerd[1553]: time="2025-12-12T18:33:35.185386442Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:33:35.185469 containerd[1553]: time="2025-12-12T18:33:35.185428872Z" level=info msg="runtime interface created" Dec 12 18:33:35.185469 containerd[1553]: time="2025-12-12T18:33:35.185434362Z" level=info msg="created NRI interface" Dec 12 18:33:35.185469 containerd[1553]: time="2025-12-12T18:33:35.185445663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:33:35.185469 containerd[1553]: time="2025-12-12T18:33:35.185459259Z" level=info msg="Connect containerd service" Dec 12 18:33:35.185677 containerd[1553]: time="2025-12-12T18:33:35.185489586Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:33:35.186567 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 18:33:35.190490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:33:35.199656 containerd[1553]: time="2025-12-12T18:33:35.190976564Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:33:35.200801 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:33:35.274127 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:33:35.278569 tar[1549]: linux-amd64/README.md Dec 12 18:33:35.289713 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 18:33:35.290213 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
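The "no network config found in /etc/cni/net.d" error above is expected before a pod network add-on is installed; the CRI plugin retries once a conflist appears there. A minimal hypothetical example of such a file (real clusters get one from their CNI plugin, e.g. flannel or Calico; all values here are illustrative):

    {
      "cniVersion": "1.0.0",
      "name": "examplenet",
      "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
      }]
    }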
Dec 12 18:33:35.296367 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:33:35.331321 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:33:35.478279 containerd[1553]: time="2025-12-12T18:33:35.478115048Z" level=info msg="Start subscribing containerd event" Dec 12 18:33:35.478279 containerd[1553]: time="2025-12-12T18:33:35.478194738Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:33:35.478469 containerd[1553]: time="2025-12-12T18:33:35.478303342Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:33:35.478469 containerd[1553]: time="2025-12-12T18:33:35.478207512Z" level=info msg="Start recovering state" Dec 12 18:33:35.478525 containerd[1553]: time="2025-12-12T18:33:35.478500521Z" level=info msg="Start event monitor" Dec 12 18:33:35.478553 containerd[1553]: time="2025-12-12T18:33:35.478524666Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:33:35.478553 containerd[1553]: time="2025-12-12T18:33:35.478544624Z" level=info msg="Start streaming server" Dec 12 18:33:35.478605 containerd[1553]: time="2025-12-12T18:33:35.478562197Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:33:35.478605 containerd[1553]: time="2025-12-12T18:33:35.478573929Z" level=info msg="runtime interface starting up..." Dec 12 18:33:35.478605 containerd[1553]: time="2025-12-12T18:33:35.478586532Z" level=info msg="starting plugins..." Dec 12 18:33:35.478684 containerd[1553]: time="2025-12-12T18:33:35.478609956Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:33:35.478935 containerd[1553]: time="2025-12-12T18:33:35.478800203Z" level=info msg="containerd successfully booted in 0.385912s" Dec 12 18:33:35.479136 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:33:35.870745 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:33:35.878936 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:38736.service - OpenSSH per-connection server daemon (10.0.0.1:38736). Dec 12 18:33:35.974301 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 38736 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:33:35.976602 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:35.992365 systemd-logind[1540]: New session 1 of user core. Dec 12 18:33:35.994149 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:33:35.997842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:33:36.041570 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:33:36.047580 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:33:36.072216 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:33:36.079746 systemd-logind[1540]: New session c1 of user core. Dec 12 18:33:36.337210 systemd[1653]: Queued start job for default target default.target. Dec 12 18:33:36.417076 systemd[1653]: Created slice app.slice - User Application Slice. Dec 12 18:33:36.417120 systemd[1653]: Reached target paths.target - Paths. Dec 12 18:33:36.417187 systemd[1653]: Reached target timers.target - Timers. Dec 12 18:33:36.423080 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Dec 12 18:33:36.438863 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:33:36.439108 systemd[1653]: Reached target sockets.target - Sockets. Dec 12 18:33:36.439174 systemd[1653]: Reached target basic.target - Basic System. Dec 12 18:33:36.439266 systemd[1653]: Reached target default.target - Main User Target. Dec 12 18:33:36.439309 systemd[1653]: Startup finished in 351ms. Dec 12 18:33:36.440090 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:33:36.461713 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:33:36.536480 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:38740.service - OpenSSH per-connection server daemon (10.0.0.1:38740). Dec 12 18:33:36.600847 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 38740 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:33:37.017952 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:37.025954 systemd-logind[1540]: New session 2 of user core. Dec 12 18:33:37.036225 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:33:37.099420 sshd[1667]: Connection closed by 10.0.0.1 port 38740 Dec 12 18:33:37.100250 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:37.112152 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:38740.service: Deactivated successfully. Dec 12 18:33:37.115161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:33:37.118622 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:33:37.122283 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:33:37.134466 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:33:37.139722 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:33:37.143121 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:38742.service - OpenSSH per-connection server daemon (10.0.0.1:38742). Dec 12 18:33:37.148113 systemd[1]: Startup finished in 4.527s (kernel) + 8.706s (initrd) + 7.037s (userspace) = 20.271s. Dec 12 18:33:37.150192 systemd-logind[1540]: Removed session 2. Dec 12 18:33:37.230256 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 38742 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:33:37.232156 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:37.240498 systemd-logind[1540]: New session 3 of user core. Dec 12 18:33:37.252234 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:33:37.310389 sshd[1692]: Connection closed by 10.0.0.1 port 38742 Dec 12 18:33:37.311075 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:37.314633 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:38742.service: Deactivated successfully. Dec 12 18:33:37.317332 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:33:37.319560 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:33:37.321404 systemd-logind[1540]: Removed session 3. 
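The per-connection unit names above (sshd@0-10.0.0.38:22-10.0.0.1:38736.service and so on) come from socket activation with Accept=yes: systemd spawns one service instance per inbound TCP connection. A sketch of the relevant part of sshd.socket (not the exact shipped unit):

    [Socket]
    ListenStream=22
    # spawn one sshd@<n>-<local>:<port>-<peer>:<port>.service per connection
    Accept=yes

    [Install]
    WantedBy=sockets.target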
Dec 12 18:33:38.093990 kubelet[1675]: E1212 18:33:38.093878 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:33:38.099132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:33:38.099348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:33:38.099780 systemd[1]: kubelet.service: Consumed 2.329s CPU time, 258.6M memory peak. Dec 12 18:33:47.328837 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:59994.service - OpenSSH per-connection server daemon (10.0.0.1:59994). Dec 12 18:33:47.389651 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 59994 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:33:47.391354 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:47.395674 systemd-logind[1540]: New session 4 of user core. Dec 12 18:33:47.404139 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:33:47.461495 sshd[1703]: Connection closed by 10.0.0.1 port 59994 Dec 12 18:33:47.461884 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:47.475518 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:59994.service: Deactivated successfully. Dec 12 18:33:47.478310 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:33:47.479315 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:33:47.483413 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:59996.service - OpenSSH per-connection server daemon (10.0.0.1:59996). Dec 12 18:33:47.484298 systemd-logind[1540]: Removed session 4. Dec 12 18:33:47.543303 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 59996 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:33:47.545456 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:47.550706 systemd-logind[1540]: New session 5 of user core. Dec 12 18:33:47.564142 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:33:47.615473 sshd[1712]: Connection closed by 10.0.0.1 port 59996 Dec 12 18:33:47.616017 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:47.632977 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:59996.service: Deactivated successfully. Dec 12 18:33:47.634870 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:33:47.635625 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:33:47.638418 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:60000.service - OpenSSH per-connection server daemon (10.0.0.1:60000). Dec 12 18:33:47.639147 systemd-logind[1540]: Removed session 5. Dec 12 18:33:47.700299 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 60000 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:33:47.701871 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:47.707020 systemd-logind[1540]: New session 6 of user core. Dec 12 18:33:47.719163 systemd[1]: Started session-6.scope - Session 6 of User core. 
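The kubelet failure above will repeat until /var/lib/kubelet/config.yaml exists, so the crash loop in the rest of this log is normal pre-bootstrap behavior. Assuming a kubeadm-managed node (the log does not confirm this), that file is written when the node joins a cluster:

    # writes /var/lib/kubelet/config.yaml plus the kubelet's bootstrap credentials
    kubeadm join <control-plane-endpoint> --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>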
Dec 12 18:33:47.775233 sshd[1721]: Connection closed by 10.0.0.1 port 60000 Dec 12 18:33:47.776245 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:47.785742 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:60000.service: Deactivated successfully. Dec 12 18:33:47.787809 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:33:47.788721 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:33:47.791461 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:60004.service - OpenSSH per-connection server daemon (10.0.0.1:60004). Dec 12 18:33:47.792084 systemd-logind[1540]: Removed session 6. Dec 12 18:33:47.863568 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 60004 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:33:47.865545 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:47.871601 systemd-logind[1540]: New session 7 of user core. Dec 12 18:33:47.893150 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:33:47.955092 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:33:47.955501 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:33:47.983158 sudo[1731]: pam_unix(sudo:session): session closed for user root Dec 12 18:33:47.985398 sshd[1730]: Connection closed by 10.0.0.1 port 60004 Dec 12 18:33:47.985799 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:48.000958 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:60004.service: Deactivated successfully. Dec 12 18:33:48.002988 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:33:48.003870 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:33:48.006772 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006). Dec 12 18:33:48.007511 systemd-logind[1540]: Removed session 7. Dec 12 18:33:48.074885 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:33:48.076963 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:48.082645 systemd-logind[1540]: New session 8 of user core. Dec 12 18:33:48.098380 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:33:48.101297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:33:48.103276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:33:48.160248 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:33:48.160687 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:33:48.192161 sudo[1745]: pam_unix(sudo:session): session closed for user root Dec 12 18:33:48.200181 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:33:48.200570 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:33:48.213006 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:33:48.269267 augenrules[1767]: No rules Dec 12 18:33:48.271407 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 12 18:33:48.271788 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:33:48.273168 sudo[1744]: pam_unix(sudo:session): session closed for user root Dec 12 18:33:48.274808 sshd[1741]: Connection closed by 10.0.0.1 port 60006 Dec 12 18:33:48.277127 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Dec 12 18:33:48.289693 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:60006.service: Deactivated successfully. Dec 12 18:33:48.291985 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:33:48.292957 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:33:48.296305 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:60014.service - OpenSSH per-connection server daemon (10.0.0.1:60014). Dec 12 18:33:48.297263 systemd-logind[1540]: Removed session 8. Dec 12 18:33:48.364353 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 60014 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:33:48.366354 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:33:48.371386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:33:48.374673 systemd-logind[1540]: New session 9 of user core. Dec 12 18:33:48.384097 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:33:48.384419 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:33:48.442049 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:33:48.442444 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:33:48.443615 kubelet[1784]: E1212 18:33:48.443571 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:33:48.451463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:33:48.451669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:33:48.452121 systemd[1]: kubelet.service: Consumed 291ms CPU time, 110.1M memory peak. Dec 12 18:33:49.445988 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:33:49.464537 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:33:49.742969 dockerd[1815]: time="2025-12-12T18:33:49.742773058Z" level=info msg="Starting up" Dec 12 18:33:49.743993 dockerd[1815]: time="2025-12-12T18:33:49.743962649Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:33:49.759975 dockerd[1815]: time="2025-12-12T18:33:49.759922474Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:33:51.024503 dockerd[1815]: time="2025-12-12T18:33:51.024427220Z" level=info msg="Loading containers: start." Dec 12 18:33:51.255966 kernel: Initializing XFRM netlink socket Dec 12 18:33:52.109370 systemd-networkd[1458]: docker0: Link UP Dec 12 18:33:52.492531 dockerd[1815]: time="2025-12-12T18:33:52.492446946Z" level=info msg="Loading containers: done." 
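The "Scheduled restart job, restart counter is at 1" line reflects kubelet.service's restart policy; the observed 10-second gap between each failure and the next start matches the settings kubelet units conventionally ship with (a sketch of the usual values, not read from this host):

    [Service]
    Restart=always
    RestartSec=10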
Dec 12 18:33:52.511645 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2966918243-merged.mount: Deactivated successfully. Dec 12 18:33:52.598103 dockerd[1815]: time="2025-12-12T18:33:52.598042818Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:33:52.598273 dockerd[1815]: time="2025-12-12T18:33:52.598148647Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:33:52.598299 dockerd[1815]: time="2025-12-12T18:33:52.598284281Z" level=info msg="Initializing buildkit" Dec 12 18:33:52.963312 dockerd[1815]: time="2025-12-12T18:33:52.963225586Z" level=info msg="Completed buildkit initialization" Dec 12 18:33:52.971193 dockerd[1815]: time="2025-12-12T18:33:52.971105180Z" level=info msg="Daemon has completed initialization" Dec 12 18:33:52.971365 dockerd[1815]: time="2025-12-12T18:33:52.971216239Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:33:52.971426 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:33:54.154141 containerd[1553]: time="2025-12-12T18:33:54.154065621Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 12 18:33:56.961902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2413204625.mount: Deactivated successfully. Dec 12 18:33:58.611285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:33:58.613716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:33:59.021790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:33:59.036470 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:33:59.141969 kubelet[2062]: E1212 18:33:59.141884 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:33:59.146456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:33:59.146703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:33:59.147325 systemd[1]: kubelet.service: Consumed 402ms CPU time, 112M memory peak. 
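The PullImage lines that follow are CRI pulls going through containerd, not through the docker daemon started above. The same pull can be reproduced from a shell against the k8s.io namespace that kubelet uses, e.g.:

    # pull into containerd's k8s.io namespace, where CRI images live
    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.34.3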
Dec 12 18:34:00.558476 containerd[1553]: time="2025-12-12T18:34:00.558391180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:00.595156 containerd[1553]: time="2025-12-12T18:34:00.595087876Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Dec 12 18:34:00.598302 containerd[1553]: time="2025-12-12T18:34:00.598256808Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:00.607545 containerd[1553]: time="2025-12-12T18:34:00.607265830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:00.609700 containerd[1553]: time="2025-12-12T18:34:00.609056497Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 6.454914913s" Dec 12 18:34:00.609700 containerd[1553]: time="2025-12-12T18:34:00.609112963Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 12 18:34:00.610505 containerd[1553]: time="2025-12-12T18:34:00.610460750Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 12 18:34:02.269697 containerd[1553]: time="2025-12-12T18:34:02.269618510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:02.274106 containerd[1553]: time="2025-12-12T18:34:02.274051552Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Dec 12 18:34:02.275677 containerd[1553]: time="2025-12-12T18:34:02.275631395Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:02.278937 containerd[1553]: time="2025-12-12T18:34:02.278872582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:02.279969 containerd[1553]: time="2025-12-12T18:34:02.279891352Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.669377923s" Dec 12 18:34:02.279969 containerd[1553]: time="2025-12-12T18:34:02.279959259Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 12 18:34:02.280549 
containerd[1553]: time="2025-12-12T18:34:02.280513969Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 12 18:34:03.768730 containerd[1553]: time="2025-12-12T18:34:03.768654773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:03.769700 containerd[1553]: time="2025-12-12T18:34:03.769634610Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Dec 12 18:34:03.770995 containerd[1553]: time="2025-12-12T18:34:03.770957040Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:03.774315 containerd[1553]: time="2025-12-12T18:34:03.774270292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:03.775513 containerd[1553]: time="2025-12-12T18:34:03.775484218Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.494940303s" Dec 12 18:34:03.775559 containerd[1553]: time="2025-12-12T18:34:03.775515827Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 12 18:34:03.776019 containerd[1553]: time="2025-12-12T18:34:03.775976732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 12 18:34:06.366071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021290558.mount: Deactivated successfully. 
Dec 12 18:34:07.338445 containerd[1553]: time="2025-12-12T18:34:07.338279427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:07.343260 containerd[1553]: time="2025-12-12T18:34:07.343157449Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Dec 12 18:34:07.347531 containerd[1553]: time="2025-12-12T18:34:07.347415154Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:07.353136 containerd[1553]: time="2025-12-12T18:34:07.353033050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:07.354111 containerd[1553]: time="2025-12-12T18:34:07.353739622Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 3.577720711s" Dec 12 18:34:07.354111 containerd[1553]: time="2025-12-12T18:34:07.353785430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 12 18:34:07.354437 containerd[1553]: time="2025-12-12T18:34:07.354390117Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 12 18:34:08.235721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1214974084.mount: Deactivated successfully. Dec 12 18:34:09.361226 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 12 18:34:09.363547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:09.755822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:09.773436 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:34:10.057180 kubelet[2147]: E1212 18:34:10.056984 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:34:10.063022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:34:10.063230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:34:10.063660 systemd[1]: kubelet.service: Consumed 299ms CPU time, 109.1M memory peak. 
Dec 12 18:34:11.439733 containerd[1553]: time="2025-12-12T18:34:11.439642427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:11.440715 containerd[1553]: time="2025-12-12T18:34:11.440670104Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Dec 12 18:34:11.443421 containerd[1553]: time="2025-12-12T18:34:11.442830990Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:11.447394 containerd[1553]: time="2025-12-12T18:34:11.447265294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:11.450565 containerd[1553]: time="2025-12-12T18:34:11.450377921Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.095926247s" Dec 12 18:34:11.450565 containerd[1553]: time="2025-12-12T18:34:11.450451691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 12 18:34:11.452611 containerd[1553]: time="2025-12-12T18:34:11.452583080Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 12 18:34:12.455104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418890667.mount: Deactivated successfully. 
Dec 12 18:34:12.466184 containerd[1553]: time="2025-12-12T18:34:12.466112817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:12.468138 containerd[1553]: time="2025-12-12T18:34:12.468081532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Dec 12 18:34:12.469326 containerd[1553]: time="2025-12-12T18:34:12.469279451Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:12.471895 containerd[1553]: time="2025-12-12T18:34:12.471844661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:12.472783 containerd[1553]: time="2025-12-12T18:34:12.472734294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.020123171s" Dec 12 18:34:12.472783 containerd[1553]: time="2025-12-12T18:34:12.472766285Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 12 18:34:12.473286 containerd[1553]: time="2025-12-12T18:34:12.473245156Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 12 18:34:13.569638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679761274.mount: Deactivated successfully. Dec 12 18:34:15.872418 containerd[1553]: time="2025-12-12T18:34:15.872312179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:15.873171 containerd[1553]: time="2025-12-12T18:34:15.873129589Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Dec 12 18:34:15.874706 containerd[1553]: time="2025-12-12T18:34:15.874644835Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:15.878542 containerd[1553]: time="2025-12-12T18:34:15.878466287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:15.880063 containerd[1553]: time="2025-12-12T18:34:15.880031417Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.406753619s" Dec 12 18:34:15.880119 containerd[1553]: time="2025-12-12T18:34:15.880067696Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 12 18:34:18.820318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
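The pull lines above report both "bytes read" and a wall-clock duration, which is enough for a rough throughput estimate per image. A small sketch doing that arithmetic with numbers copied from the log — the computation is ours, containerd does not log throughput:

```go
// Hedged sketch: back-of-envelope pull throughput from the containerd
// entries above. bytes = "bytes read", secs = reported pull duration.
package main

import "fmt"

func main() {
	pulls := []struct {
		image string
		bytes float64
		secs  float64
	}{
		{"kube-scheduler:v1.34.3", 15725927, 1.494940303},
		{"kube-proxy:v1.34.3", 25965293, 3.577720711},
		{"coredns:v1.12.1", 22388007, 4.095926247},
		{"pause:3.10.1", 321218, 1.020123171},
		{"etcd:3.6.4-0", 74166814, 3.406753619},
	}
	for _, p := range pulls {
		mibps := p.bytes / p.secs / (1 << 20) // bytes/s -> MiB/s
		fmt.Printf("%-24s %.1f MiB/s\n", p.image, mibps)
	}
}
```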
Dec 12 18:34:18.820478 systemd[1]: kubelet.service: Consumed 299ms CPU time, 109.1M memory peak. Dec 12 18:34:18.822541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:18.850700 systemd[1]: Reload requested from client PID 2284 ('systemctl') (unit session-9.scope)... Dec 12 18:34:18.850713 systemd[1]: Reloading... Dec 12 18:34:18.959946 zram_generator::config[2326]: No configuration found. Dec 12 18:34:19.272785 systemd[1]: Reloading finished in 421 ms. Dec 12 18:34:19.373837 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:34:19.373998 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:34:19.374432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:19.374514 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.2M memory peak. Dec 12 18:34:19.376668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:19.586228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:19.590610 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:34:19.633046 kubelet[2375]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:34:19.633046 kubelet[2375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:34:19.633451 kubelet[2375]: I1212 18:34:19.633100 2375 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:34:19.905766 update_engine[1542]: I20251212 18:34:19.905627 1542 update_attempter.cc:509] Updating boot flags... Dec 12 18:34:20.393474 kubelet[2375]: I1212 18:34:20.393437 2375 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 18:34:20.393737 kubelet[2375]: I1212 18:34:20.393723 2375 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:34:20.393929 kubelet[2375]: I1212 18:34:20.393894 2375 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 18:34:20.394017 kubelet[2375]: I1212 18:34:20.393994 2375 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:34:20.395415 kubelet[2375]: I1212 18:34:20.395397 2375 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:34:21.045379 kubelet[2375]: E1212 18:34:21.045313 2375 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:34:21.045950 kubelet[2375]: I1212 18:34:21.045921 2375 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:34:21.050089 kubelet[2375]: I1212 18:34:21.050053 2375 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:34:21.055283 kubelet[2375]: I1212 18:34:21.055237 2375 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 12 18:34:21.056691 kubelet[2375]: I1212 18:34:21.056644 2375 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:34:21.056876 kubelet[2375]: I1212 18:34:21.056681 2375 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:34:21.057031 kubelet[2375]: I1212 18:34:21.056888 2375 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:34:21.057031 kubelet[2375]: I1212 18:34:21.056901 2375 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 18:34:21.057080 kubelet[2375]: I1212 18:34:21.057063 2375 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 18:34:21.365350 kubelet[2375]: I1212 18:34:21.365295 2375 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:34:21.367945 kubelet[2375]: I1212 18:34:21.367889 2375 kubelet.go:475] "Attempting to sync node with API server" Dec 12 
18:34:21.368015 kubelet[2375]: I1212 18:34:21.367954 2375 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:34:21.368015 kubelet[2375]: I1212 18:34:21.368011 2375 kubelet.go:387] "Adding apiserver pod source" Dec 12 18:34:21.368091 kubelet[2375]: I1212 18:34:21.368050 2375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:34:21.384420 kubelet[2375]: E1212 18:34:21.384298 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:34:21.384596 kubelet[2375]: E1212 18:34:21.384454 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:34:21.386581 kubelet[2375]: I1212 18:34:21.386520 2375 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:34:21.387774 kubelet[2375]: I1212 18:34:21.387733 2375 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:34:21.387774 kubelet[2375]: I1212 18:34:21.387776 2375 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 18:34:21.388022 kubelet[2375]: W1212 18:34:21.387868 2375 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
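Every "Failed to watch ... connection refused" reflector error above is the same symptom: nothing is listening on 10.0.0.38:6443 yet, because the kube-apiserver static pod is still being created by this very kubelet. A standalone probe with equivalent wait-for-apiserver semantics, assuming the address from the log and a backoff schedule of our own choosing:

```go
// Hedged sketch: poll the apiserver port until it accepts TCP connections,
// doubling the wait between attempts up to a 30s ceiling. The address is
// from the log; the loop parameters are illustrative, not client-go's.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.0.0.38:6443"
	for delay := 200 * time.Millisecond; ; {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver port is accepting connections")
			return
		}
		fmt.Printf("dial %s: %v; retrying in %v\n", addr, err, delay)
		time.Sleep(delay)
		if delay < 30*time.Second {
			delay *= 2
		}
	}
}
```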
Dec 12 18:34:21.428746 kubelet[2375]: I1212 18:34:21.428690 2375 server.go:1262] "Started kubelet" Dec 12 18:34:21.430039 kubelet[2375]: I1212 18:34:21.430002 2375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:34:21.430221 kubelet[2375]: I1212 18:34:21.430042 2375 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:34:21.430221 kubelet[2375]: I1212 18:34:21.430101 2375 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 18:34:21.430506 kubelet[2375]: I1212 18:34:21.430380 2375 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:34:21.430579 kubelet[2375]: I1212 18:34:21.430559 2375 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:34:21.433885 kubelet[2375]: I1212 18:34:21.433851 2375 server.go:310] "Adding debug handlers to kubelet server" Dec 12 18:34:21.435231 kubelet[2375]: E1212 18:34:21.435130 2375 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 18:34:21.435231 kubelet[2375]: I1212 18:34:21.435229 2375 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 18:34:21.435563 kubelet[2375]: I1212 18:34:21.435531 2375 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 18:34:21.435687 kubelet[2375]: I1212 18:34:21.435596 2375 reconciler.go:29] "Reconciler: start to sync state" Dec 12 18:34:21.436067 kubelet[2375]: E1212 18:34:21.436043 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:34:21.436332 kubelet[2375]: E1212 18:34:21.436293 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="200ms" Dec 12 18:34:21.436426 kubelet[2375]: E1212 18:34:21.436408 2375 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:34:21.436645 kubelet[2375]: I1212 18:34:21.436623 2375 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:34:21.436737 kubelet[2375]: I1212 18:34:21.436704 2375 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:34:21.437394 kubelet[2375]: I1212 18:34:21.437370 2375 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:34:21.438653 kubelet[2375]: I1212 18:34:21.437900 2375 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:34:21.438653 kubelet[2375]: E1212 18:34:21.435891 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18808b85f781b748 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 18:34:21.42861908 +0000 UTC m=+1.833591903,LastTimestamp:2025-12-12 18:34:21.42861908 +0000 UTC m=+1.833591903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 18:34:21.438981 kubelet[2375]: I1212 18:34:21.438880 2375 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 12 18:34:21.454642 kubelet[2375]: I1212 18:34:21.454606 2375 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 12 18:34:21.454642 kubelet[2375]: I1212 18:34:21.454646 2375 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 18:34:21.454808 kubelet[2375]: I1212 18:34:21.454686 2375 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 18:34:21.454808 kubelet[2375]: E1212 18:34:21.454726 2375 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:34:21.455163 kubelet[2375]: E1212 18:34:21.455125 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:34:21.455830 kubelet[2375]: I1212 18:34:21.455802 2375 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:34:21.455830 kubelet[2375]: I1212 18:34:21.455817 2375 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:34:21.456014 kubelet[2375]: I1212 18:34:21.455858 2375 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:34:21.460071 kubelet[2375]: I1212 18:34:21.460033 2375 policy_none.go:49] "None policy: Start" Dec 12 18:34:21.460071 kubelet[2375]: I1212 18:34:21.460061 2375 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 18:34:21.460153 kubelet[2375]: I1212 18:34:21.460075 2375 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 18:34:21.461832 kubelet[2375]: I1212 18:34:21.461805 2375 policy_none.go:47] "Start" Dec 12 18:34:21.469280 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:34:21.487059 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:34:21.490678 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:34:21.505129 kubelet[2375]: E1212 18:34:21.505073 2375 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:34:21.505426 kubelet[2375]: I1212 18:34:21.505392 2375 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:34:21.505473 kubelet[2375]: I1212 18:34:21.505414 2375 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:34:21.505736 kubelet[2375]: I1212 18:34:21.505707 2375 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:34:21.507033 kubelet[2375]: E1212 18:34:21.507006 2375 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:34:21.507123 kubelet[2375]: E1212 18:34:21.507105 2375 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 18:34:21.569777 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Dec 12 18:34:21.591224 kubelet[2375]: E1212 18:34:21.591169 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:34:21.595282 systemd[1]: Created slice kubepods-burstable-pod3e7372ec8d0dce286c27ddaf7f44d897.slice - libcontainer container kubepods-burstable-pod3e7372ec8d0dce286c27ddaf7f44d897.slice. Dec 12 18:34:21.597597 kubelet[2375]: E1212 18:34:21.597485 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:34:21.607958 kubelet[2375]: I1212 18:34:21.607893 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:34:21.608397 kubelet[2375]: E1212 18:34:21.608364 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Dec 12 18:34:21.611302 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Dec 12 18:34:21.614494 kubelet[2375]: E1212 18:34:21.614461 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:34:21.637147 kubelet[2375]: I1212 18:34:21.636956 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e7372ec8d0dce286c27ddaf7f44d897-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e7372ec8d0dce286c27ddaf7f44d897\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:21.637147 kubelet[2375]: I1212 18:34:21.637013 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e7372ec8d0dce286c27ddaf7f44d897-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e7372ec8d0dce286c27ddaf7f44d897\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:21.637324 kubelet[2375]: E1212 18:34:21.637180 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms" Dec 12 18:34:21.737820 kubelet[2375]: I1212 18:34:21.737749 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e7372ec8d0dce286c27ddaf7f44d897-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e7372ec8d0dce286c27ddaf7f44d897\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:21.737820 kubelet[2375]: I1212 18:34:21.737823 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:21.738094 kubelet[2375]: I1212 18:34:21.737966 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:21.738094 kubelet[2375]: I1212 18:34:21.738030 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:21.738164 kubelet[2375]: I1212 18:34:21.738092 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 12 18:34:21.738164 kubelet[2375]: I1212 18:34:21.738133 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:21.738235 kubelet[2375]: I1212 18:34:21.738182 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:21.810526 kubelet[2375]: I1212 18:34:21.810447 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:34:21.810993 kubelet[2375]: E1212 18:34:21.810949 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Dec 12 18:34:21.896555 kubelet[2375]: E1212 18:34:21.896381 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:21.897525 containerd[1553]: time="2025-12-12T18:34:21.897449321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:21.901828 kubelet[2375]: E1212 18:34:21.901773 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:21.902305 containerd[1553]: time="2025-12-12T18:34:21.902264635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e7372ec8d0dce286c27ddaf7f44d897,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:21.918348 kubelet[2375]: E1212 18:34:21.918277 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:21.918933 containerd[1553]: time="2025-12-12T18:34:21.918873320Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 12 18:34:22.038658 kubelet[2375]: E1212 18:34:22.038587 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms" Dec 12 18:34:22.213576 kubelet[2375]: I1212 18:34:22.213420 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:34:22.214083 kubelet[2375]: E1212 18:34:22.213870 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Dec 12 18:34:22.429179 kubelet[2375]: E1212 18:34:22.429121 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:34:22.451895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187225838.mount: Deactivated successfully. Dec 12 18:34:22.462745 containerd[1553]: time="2025-12-12T18:34:22.462700287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:34:22.467194 containerd[1553]: time="2025-12-12T18:34:22.467090362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:34:22.468946 containerd[1553]: time="2025-12-12T18:34:22.468873581Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:34:22.472064 containerd[1553]: time="2025-12-12T18:34:22.472015979Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:34:22.473418 containerd[1553]: time="2025-12-12T18:34:22.473383612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:34:22.474666 containerd[1553]: time="2025-12-12T18:34:22.474619297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:34:22.476238 containerd[1553]: time="2025-12-12T18:34:22.476196427Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:34:22.477763 containerd[1553]: time="2025-12-12T18:34:22.477710718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:34:22.478452 containerd[1553]: time="2025-12-12T18:34:22.478409077Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 574.040596ms" Dec 12 18:34:22.481274 containerd[1553]: time="2025-12-12T18:34:22.481222534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 560.378738ms" Dec 12 18:34:22.482324 containerd[1553]: time="2025-12-12T18:34:22.482274070Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 580.771998ms" Dec 12 18:34:22.520686 kubelet[2375]: E1212 18:34:22.520636 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:34:22.525956 containerd[1553]: time="2025-12-12T18:34:22.525098643Z" level=info msg="connecting to shim 5aa82e2f960b1b9874aa5abf3bc943b9c5713b51626083cf19777fdc41f0011a" address="unix:///run/containerd/s/4d0fb6dc497bbbe3e4fbc7e50692098f850b6ebbc94c4c9b18a78435c7489ab8" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:22.529753 containerd[1553]: time="2025-12-12T18:34:22.529698484Z" level=info msg="connecting to shim 045a44e6aea7a2bf75a8fc2f2f019685ce78f8c4b1e29579969cd8881855cb23" address="unix:///run/containerd/s/3d3c0ae9081b4bb8ede2fa239edb86b098dccea9f119a6923959e3c16f6ec45c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:22.534670 containerd[1553]: time="2025-12-12T18:34:22.534613520Z" level=info msg="connecting to shim 13f537d74090d8f73f4651ef835984c0eb173fc12ebd2f5bf799db82aaee2255" address="unix:///run/containerd/s/2b5b33e489ca27385b0f0b170833575975214cc8cd537edc096363baf1cd9a06" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:34:22.558161 systemd[1]: Started cri-containerd-045a44e6aea7a2bf75a8fc2f2f019685ce78f8c4b1e29579969cd8881855cb23.scope - libcontainer container 045a44e6aea7a2bf75a8fc2f2f019685ce78f8c4b1e29579969cd8881855cb23. Dec 12 18:34:22.562685 systemd[1]: Started cri-containerd-5aa82e2f960b1b9874aa5abf3bc943b9c5713b51626083cf19777fdc41f0011a.scope - libcontainer container 5aa82e2f960b1b9874aa5abf3bc943b9c5713b51626083cf19777fdc41f0011a. 
Dec 12 18:34:22.563747 kubelet[2375]: E1212 18:34:22.563663 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18808b85f781b748 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 18:34:21.42861908 +0000 UTC m=+1.833591903,LastTimestamp:2025-12-12 18:34:21.42861908 +0000 UTC m=+1.833591903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 18:34:22.567139 systemd[1]: Started cri-containerd-13f537d74090d8f73f4651ef835984c0eb173fc12ebd2f5bf799db82aaee2255.scope - libcontainer container 13f537d74090d8f73f4651ef835984c0eb173fc12ebd2f5bf799db82aaee2255. Dec 12 18:34:22.670556 containerd[1553]: time="2025-12-12T18:34:22.670100179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e7372ec8d0dce286c27ddaf7f44d897,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aa82e2f960b1b9874aa5abf3bc943b9c5713b51626083cf19777fdc41f0011a\"" Dec 12 18:34:22.671757 kubelet[2375]: E1212 18:34:22.671498 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:22.684182 containerd[1553]: time="2025-12-12T18:34:22.684117153Z" level=info msg="CreateContainer within sandbox \"5aa82e2f960b1b9874aa5abf3bc943b9c5713b51626083cf19777fdc41f0011a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:34:22.684373 containerd[1553]: time="2025-12-12T18:34:22.684120840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"045a44e6aea7a2bf75a8fc2f2f019685ce78f8c4b1e29579969cd8881855cb23\"" Dec 12 18:34:22.685689 kubelet[2375]: E1212 18:34:22.685668 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:22.691099 containerd[1553]: time="2025-12-12T18:34:22.691048259Z" level=info msg="CreateContainer within sandbox \"045a44e6aea7a2bf75a8fc2f2f019685ce78f8c4b1e29579969cd8881855cb23\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:34:22.692656 kubelet[2375]: E1212 18:34:22.692614 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:34:22.705433 containerd[1553]: time="2025-12-12T18:34:22.705368064Z" level=info msg="Container 664f076605a794c60920da99dbe304710af05fe45eb72b04d0f40d589a5c38ef: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:22.709130 containerd[1553]: time="2025-12-12T18:34:22.709097271Z" level=info msg="Container 01b1372d70f24c430af5c9fe4a456564b1e139ed54a7022ebf1f7bdd450367c5: CDI devices from CRI 
Config.CDIDevices: []" Dec 12 18:34:22.719291 containerd[1553]: time="2025-12-12T18:34:22.718195090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"13f537d74090d8f73f4651ef835984c0eb173fc12ebd2f5bf799db82aaee2255\"" Dec 12 18:34:22.719483 kubelet[2375]: E1212 18:34:22.719442 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:22.721215 containerd[1553]: time="2025-12-12T18:34:22.721043252Z" level=info msg="CreateContainer within sandbox \"5aa82e2f960b1b9874aa5abf3bc943b9c5713b51626083cf19777fdc41f0011a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"664f076605a794c60920da99dbe304710af05fe45eb72b04d0f40d589a5c38ef\"" Dec 12 18:34:22.721742 containerd[1553]: time="2025-12-12T18:34:22.721678041Z" level=info msg="StartContainer for \"664f076605a794c60920da99dbe304710af05fe45eb72b04d0f40d589a5c38ef\"" Dec 12 18:34:22.724215 containerd[1553]: time="2025-12-12T18:34:22.724175660Z" level=info msg="connecting to shim 664f076605a794c60920da99dbe304710af05fe45eb72b04d0f40d589a5c38ef" address="unix:///run/containerd/s/4d0fb6dc497bbbe3e4fbc7e50692098f850b6ebbc94c4c9b18a78435c7489ab8" protocol=ttrpc version=3 Dec 12 18:34:22.732097 containerd[1553]: time="2025-12-12T18:34:22.732050859Z" level=info msg="CreateContainer within sandbox \"13f537d74090d8f73f4651ef835984c0eb173fc12ebd2f5bf799db82aaee2255\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:34:22.733307 containerd[1553]: time="2025-12-12T18:34:22.733267348Z" level=info msg="CreateContainer within sandbox \"045a44e6aea7a2bf75a8fc2f2f019685ce78f8c4b1e29579969cd8881855cb23\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"01b1372d70f24c430af5c9fe4a456564b1e139ed54a7022ebf1f7bdd450367c5\"" Dec 12 18:34:22.734744 containerd[1553]: time="2025-12-12T18:34:22.733858013Z" level=info msg="StartContainer for \"01b1372d70f24c430af5c9fe4a456564b1e139ed54a7022ebf1f7bdd450367c5\"" Dec 12 18:34:22.735274 containerd[1553]: time="2025-12-12T18:34:22.735219326Z" level=info msg="connecting to shim 01b1372d70f24c430af5c9fe4a456564b1e139ed54a7022ebf1f7bdd450367c5" address="unix:///run/containerd/s/3d3c0ae9081b4bb8ede2fa239edb86b098dccea9f119a6923959e3c16f6ec45c" protocol=ttrpc version=3 Dec 12 18:34:22.747064 systemd[1]: Started cri-containerd-664f076605a794c60920da99dbe304710af05fe45eb72b04d0f40d589a5c38ef.scope - libcontainer container 664f076605a794c60920da99dbe304710af05fe45eb72b04d0f40d589a5c38ef. Dec 12 18:34:22.752768 systemd[1]: Started cri-containerd-01b1372d70f24c430af5c9fe4a456564b1e139ed54a7022ebf1f7bdd450367c5.scope - libcontainer container 01b1372d70f24c430af5c9fe4a456564b1e139ed54a7022ebf1f7bdd450367c5. 
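The sequence visible across these entries is the standard CRI ordering: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer runs it. A sketch of that ordering against simplified stand-in interfaces — not the real k8s.io/cri-api client, whose requests go over gRPC:

```go
// Hedged sketch: the RunPodSandbox -> CreateContainer -> StartContainer
// ordering seen in the log, with fake in-process stand-ins for the CRI
// RuntimeService calls.
package main

import "fmt"

type runtimeService interface {
	RunPodSandbox(name string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}

func (f *fakeRuntime) StartContainer(id string) error { return nil }

func main() {
	var rt runtimeService = &fakeRuntime{}
	// Same ordering as the kube-apiserver-localhost flow above.
	sb, _ := rt.RunPodSandbox("kube-apiserver-localhost")
	ctr, _ := rt.CreateContainer(sb, "kube-apiserver")
	_ = rt.StartContainer(ctr)
	fmt.Println("started", ctr, "in", sb)
}
```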
Dec 12 18:34:22.774537 kubelet[2375]: E1212 18:34:22.774477 2375 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:34:22.839142 kubelet[2375]: E1212 18:34:22.839080 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="1.6s" Dec 12 18:34:23.015825 kubelet[2375]: I1212 18:34:23.015680 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:34:23.063798 containerd[1553]: time="2025-12-12T18:34:23.063732805Z" level=info msg="StartContainer for \"664f076605a794c60920da99dbe304710af05fe45eb72b04d0f40d589a5c38ef\" returns successfully" Dec 12 18:34:23.064946 containerd[1553]: time="2025-12-12T18:34:23.064256664Z" level=info msg="StartContainer for \"01b1372d70f24c430af5c9fe4a456564b1e139ed54a7022ebf1f7bdd450367c5\" returns successfully" Dec 12 18:34:23.076337 containerd[1553]: time="2025-12-12T18:34:23.076266955Z" level=info msg="Container 3a462c351ac3784dde0fe31d330925ad535b6b15a09d2a444187161f7ef55b46: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:23.094719 containerd[1553]: time="2025-12-12T18:34:23.094666112Z" level=info msg="CreateContainer within sandbox \"13f537d74090d8f73f4651ef835984c0eb173fc12ebd2f5bf799db82aaee2255\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a462c351ac3784dde0fe31d330925ad535b6b15a09d2a444187161f7ef55b46\"" Dec 12 18:34:23.095326 containerd[1553]: time="2025-12-12T18:34:23.095300680Z" level=info msg="StartContainer for \"3a462c351ac3784dde0fe31d330925ad535b6b15a09d2a444187161f7ef55b46\"" Dec 12 18:34:23.096843 containerd[1553]: time="2025-12-12T18:34:23.096810101Z" level=info msg="connecting to shim 3a462c351ac3784dde0fe31d330925ad535b6b15a09d2a444187161f7ef55b46" address="unix:///run/containerd/s/2b5b33e489ca27385b0f0b170833575975214cc8cd537edc096363baf1cd9a06" protocol=ttrpc version=3 Dec 12 18:34:23.125229 systemd[1]: Started cri-containerd-3a462c351ac3784dde0fe31d330925ad535b6b15a09d2a444187161f7ef55b46.scope - libcontainer container 3a462c351ac3784dde0fe31d330925ad535b6b15a09d2a444187161f7ef55b46. 
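Note the retry interval in the "Failed to ensure lease exists, will retry" lines: 200ms, then 400ms, then 800ms, and now 1.6s — a doubling backoff. A tiny reproduction of that schedule; the 7s cap is an assumption, since the log never shows the interval saturating:

```go
// Hedged sketch: the doubling retry schedule observed in the lease
// controller's "will retry" intervals (200ms -> 400ms -> 800ms -> 1.6s).
// The cap value is our assumption, not taken from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxInterval = 7 * time.Second
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
		if interval*2 <= maxInterval {
			interval *= 2
		} else {
			interval = maxInterval
		}
	}
}
```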
Dec 12 18:34:23.189937 containerd[1553]: time="2025-12-12T18:34:23.189839838Z" level=info msg="StartContainer for \"3a462c351ac3784dde0fe31d330925ad535b6b15a09d2a444187161f7ef55b46\" returns successfully" Dec 12 18:34:23.465057 kubelet[2375]: E1212 18:34:23.465017 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:34:23.465506 kubelet[2375]: E1212 18:34:23.465204 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:23.468928 kubelet[2375]: E1212 18:34:23.468526 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:34:23.468928 kubelet[2375]: E1212 18:34:23.468627 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:23.470675 kubelet[2375]: E1212 18:34:23.470647 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:34:23.470781 kubelet[2375]: E1212 18:34:23.470762 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:24.472871 kubelet[2375]: E1212 18:34:24.472828 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:34:24.473335 kubelet[2375]: E1212 18:34:24.472978 2375 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:34:24.473335 kubelet[2375]: E1212 18:34:24.473036 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:24.473335 kubelet[2375]: E1212 18:34:24.473177 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:25.267893 kubelet[2375]: E1212 18:34:25.267839 2375 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 18:34:25.347245 kubelet[2375]: I1212 18:34:25.347180 2375 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 18:34:25.370929 kubelet[2375]: I1212 18:34:25.370863 2375 apiserver.go:52] "Watching apiserver" Dec 12 18:34:25.436256 kubelet[2375]: I1212 18:34:25.436164 2375 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 18:34:25.436256 kubelet[2375]: I1212 18:34:25.436226 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:25.442081 kubelet[2375]: E1212 18:34:25.442044 2375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:25.442081 kubelet[2375]: I1212 
18:34:25.442078 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:25.443742 kubelet[2375]: E1212 18:34:25.443690 2375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:25.443742 kubelet[2375]: I1212 18:34:25.443714 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 18:34:25.445150 kubelet[2375]: E1212 18:34:25.445122 2375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 18:34:25.472425 kubelet[2375]: I1212 18:34:25.472384 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:25.474353 kubelet[2375]: E1212 18:34:25.474314 2375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:25.474745 kubelet[2375]: E1212 18:34:25.474486 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:25.566401 kubelet[2375]: I1212 18:34:25.566271 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 18:34:25.568566 kubelet[2375]: E1212 18:34:25.568522 2375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 18:34:25.568715 kubelet[2375]: E1212 18:34:25.568699 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:25.748465 kubelet[2375]: I1212 18:34:25.748384 2375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:25.750330 kubelet[2375]: E1212 18:34:25.750307 2375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:25.750509 kubelet[2375]: E1212 18:34:25.750488 2375 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:27.678758 systemd[1]: Reload requested from client PID 2681 ('systemctl') (unit session-9.scope)... Dec 12 18:34:27.678775 systemd[1]: Reloading... Dec 12 18:34:27.838934 zram_generator::config[2724]: No configuration found. Dec 12 18:34:28.808246 systemd[1]: Reloading finished in 1128 ms. Dec 12 18:34:28.894162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:28.909458 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:34:28.909827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 18:34:28.909881 systemd[1]: kubelet.service: Consumed 1.439s CPU time, 127.1M memory peak. Dec 12 18:34:28.912927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:29.133253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:29.152463 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:34:29.207621 kubelet[2769]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:34:29.207621 kubelet[2769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:34:29.208064 kubelet[2769]: I1212 18:34:29.207631 2769 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:34:29.214432 kubelet[2769]: I1212 18:34:29.214400 2769 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 18:34:29.214432 kubelet[2769]: I1212 18:34:29.214427 2769 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:34:29.214570 kubelet[2769]: I1212 18:34:29.214458 2769 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 18:34:29.214570 kubelet[2769]: I1212 18:34:29.214466 2769 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:34:29.214734 kubelet[2769]: I1212 18:34:29.214663 2769 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:34:29.215941 kubelet[2769]: I1212 18:34:29.215903 2769 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:34:29.218774 kubelet[2769]: I1212 18:34:29.218702 2769 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:34:29.224472 kubelet[2769]: I1212 18:34:29.224427 2769 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:34:29.233405 kubelet[2769]: I1212 18:34:29.233343 2769 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 18:34:29.233612 kubelet[2769]: I1212 18:34:29.233592 2769 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:34:29.233820 kubelet[2769]: I1212 18:34:29.233618 2769 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:34:29.233820 kubelet[2769]: I1212 18:34:29.233805 2769 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:34:29.233820 kubelet[2769]: I1212 18:34:29.233814 2769 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 18:34:29.234036 kubelet[2769]: I1212 18:34:29.233838 2769 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 18:34:29.234962 kubelet[2769]: I1212 18:34:29.234937 2769 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:34:29.235133 kubelet[2769]: I1212 18:34:29.235110 2769 kubelet.go:475] "Attempting to sync node with API server" Dec 12 18:34:29.235196 kubelet[2769]: I1212 18:34:29.235141 2769 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:34:29.235196 kubelet[2769]: I1212 18:34:29.235171 2769 kubelet.go:387] "Adding apiserver pod source" Dec 12 18:34:29.235196 kubelet[2769]: I1212 18:34:29.235190 2769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:34:29.236042 kubelet[2769]: I1212 18:34:29.235930 2769 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:34:29.236448 kubelet[2769]: I1212 18:34:29.236422 2769 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:34:29.236508 kubelet[2769]: I1212 18:34:29.236454 2769 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 18:34:29.240940 
kubelet[2769]: I1212 18:34:29.239462 2769 server.go:1262] "Started kubelet" Dec 12 18:34:29.240940 kubelet[2769]: I1212 18:34:29.240864 2769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:34:29.247337 kubelet[2769]: I1212 18:34:29.247241 2769 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:34:29.247337 kubelet[2769]: I1212 18:34:29.247340 2769 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 18:34:29.247694 kubelet[2769]: I1212 18:34:29.247660 2769 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:34:29.251625 kubelet[2769]: E1212 18:34:29.251525 2769 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:34:29.252430 kubelet[2769]: I1212 18:34:29.252380 2769 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 18:34:29.252486 kubelet[2769]: I1212 18:34:29.252449 2769 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:34:29.252866 kubelet[2769]: I1212 18:34:29.252831 2769 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 18:34:29.253180 kubelet[2769]: I1212 18:34:29.253146 2769 reconciler.go:29] "Reconciler: start to sync state" Dec 12 18:34:29.254199 kubelet[2769]: I1212 18:34:29.251074 2769 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:34:29.255427 kubelet[2769]: I1212 18:34:29.255405 2769 server.go:310] "Adding debug handlers to kubelet server" Dec 12 18:34:29.258368 kubelet[2769]: I1212 18:34:29.258329 2769 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:34:29.258506 kubelet[2769]: I1212 18:34:29.258469 2769 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:34:29.260251 kubelet[2769]: I1212 18:34:29.260221 2769 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:34:29.264155 kubelet[2769]: I1212 18:34:29.264112 2769 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 18:34:29.265409 kubelet[2769]: I1212 18:34:29.265360 2769 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 18:34:29.265409 kubelet[2769]: I1212 18:34:29.265404 2769 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 18:34:29.265483 kubelet[2769]: I1212 18:34:29.265443 2769 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 18:34:29.265547 kubelet[2769]: E1212 18:34:29.265504 2769 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:34:29.303557 kubelet[2769]: I1212 18:34:29.303520 2769 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:34:29.303557 kubelet[2769]: I1212 18:34:29.303537 2769 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:34:29.303557 kubelet[2769]: I1212 18:34:29.303556 2769 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:34:29.303785 kubelet[2769]: I1212 18:34:29.303693 2769 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:34:29.303785 kubelet[2769]: I1212 18:34:29.303702 2769 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:34:29.303785 kubelet[2769]: I1212 18:34:29.303718 2769 policy_none.go:49] "None policy: Start" Dec 12 18:34:29.303785 kubelet[2769]: I1212 18:34:29.303728 2769 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 18:34:29.303785 kubelet[2769]: I1212 18:34:29.303737 2769 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 18:34:29.303890 kubelet[2769]: I1212 18:34:29.303818 2769 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 12 18:34:29.303890 kubelet[2769]: I1212 18:34:29.303827 2769 policy_none.go:47] "Start" Dec 12 18:34:29.308788 kubelet[2769]: E1212 18:34:29.308710 2769 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:34:29.308942 kubelet[2769]: I1212 18:34:29.308887 2769 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:34:29.308942 kubelet[2769]: I1212 18:34:29.308905 2769 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:34:29.309260 kubelet[2769]: I1212 18:34:29.309228 2769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:34:29.312235 kubelet[2769]: E1212 18:34:29.312198 2769 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:34:29.366799 kubelet[2769]: I1212 18:34:29.366580 2769 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 18:34:29.367234 kubelet[2769]: I1212 18:34:29.366868 2769 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:29.367673 kubelet[2769]: I1212 18:34:29.367610 2769 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:29.417204 kubelet[2769]: I1212 18:34:29.415465 2769 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:34:29.426939 kubelet[2769]: I1212 18:34:29.426884 2769 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 18:34:29.427111 kubelet[2769]: I1212 18:34:29.427053 2769 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 18:34:29.554053 kubelet[2769]: I1212 18:34:29.553999 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:29.554053 kubelet[2769]: I1212 18:34:29.554039 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:29.554053 kubelet[2769]: I1212 18:34:29.554059 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:29.554291 kubelet[2769]: I1212 18:34:29.554099 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 12 18:34:29.554291 kubelet[2769]: I1212 18:34:29.554116 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e7372ec8d0dce286c27ddaf7f44d897-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e7372ec8d0dce286c27ddaf7f44d897\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:29.554291 kubelet[2769]: I1212 18:34:29.554136 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e7372ec8d0dce286c27ddaf7f44d897-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e7372ec8d0dce286c27ddaf7f44d897\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:29.554291 kubelet[2769]: I1212 18:34:29.554181 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3e7372ec8d0dce286c27ddaf7f44d897-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e7372ec8d0dce286c27ddaf7f44d897\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:29.554291 kubelet[2769]: I1212 18:34:29.554210 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:29.554478 kubelet[2769]: I1212 18:34:29.554250 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:34:29.676662 kubelet[2769]: E1212 18:34:29.676538 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:29.676662 kubelet[2769]: E1212 18:34:29.676538 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:29.676827 kubelet[2769]: E1212 18:34:29.676538 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:30.236083 kubelet[2769]: I1212 18:34:30.235896 2769 apiserver.go:52] "Watching apiserver" Dec 12 18:34:30.253596 kubelet[2769]: I1212 18:34:30.253537 2769 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 18:34:30.284735 kubelet[2769]: I1212 18:34:30.284654 2769 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 18:34:30.286106 kubelet[2769]: I1212 18:34:30.285016 2769 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:30.286106 kubelet[2769]: E1212 18:34:30.285498 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:30.470540 kubelet[2769]: E1212 18:34:30.470480 2769 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 18:34:30.470787 kubelet[2769]: E1212 18:34:30.470733 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:30.473026 kubelet[2769]: E1212 18:34:30.472625 2769 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 18:34:30.473026 kubelet[2769]: E1212 18:34:30.472891 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:30.501567 kubelet[2769]: I1212 18:34:30.500457 2769 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.500425068 podStartE2EDuration="1.500425068s" podCreationTimestamp="2025-12-12 18:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:30.471607828 +0000 UTC m=+1.314914963" watchObservedRunningTime="2025-12-12 18:34:30.500425068 +0000 UTC m=+1.343732203" Dec 12 18:34:30.523680 kubelet[2769]: I1212 18:34:30.523176 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.523147087 podStartE2EDuration="1.523147087s" podCreationTimestamp="2025-12-12 18:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:30.502383857 +0000 UTC m=+1.345690992" watchObservedRunningTime="2025-12-12 18:34:30.523147087 +0000 UTC m=+1.366454222" Dec 12 18:34:30.523680 kubelet[2769]: I1212 18:34:30.523294 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.523287922 podStartE2EDuration="1.523287922s" podCreationTimestamp="2025-12-12 18:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:30.514441098 +0000 UTC m=+1.357748243" watchObservedRunningTime="2025-12-12 18:34:30.523287922 +0000 UTC m=+1.366595067" Dec 12 18:34:31.287571 kubelet[2769]: E1212 18:34:31.287531 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:31.288049 kubelet[2769]: E1212 18:34:31.287616 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:33.210444 kubelet[2769]: E1212 18:34:33.210395 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:33.291420 kubelet[2769]: E1212 18:34:33.291387 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:34.625395 kubelet[2769]: I1212 18:34:34.625324 2769 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:34:34.626094 containerd[1553]: time="2025-12-12T18:34:34.626050105Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:34:34.626534 kubelet[2769]: I1212 18:34:34.626465 2769 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:34:36.398508 kubelet[2769]: E1212 18:34:36.398073 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:36.407698 systemd[1]: Created slice kubepods-besteffort-pod40973630_69db_4afe_9fed_5bd630e109f3.slice - libcontainer container kubepods-besteffort-pod40973630_69db_4afe_9fed_5bd630e109f3.slice. 
Dec 12 18:34:36.424699 systemd[1]: Created slice kubepods-besteffort-podcefcfeda_7bf8_4a99_a05b_8b22828370e2.slice - libcontainer container kubepods-besteffort-podcefcfeda_7bf8_4a99_a05b_8b22828370e2.slice.
Dec 12 18:34:36.553590 kubelet[2769]: I1212 18:34:36.553507 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5hbd\" (UniqueName: \"kubernetes.io/projected/cefcfeda-7bf8-4a99-a05b-8b22828370e2-kube-api-access-r5hbd\") pod \"tigera-operator-65cdcdfd6d-295b9\" (UID: \"cefcfeda-7bf8-4a99-a05b-8b22828370e2\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-295b9"
Dec 12 18:34:36.553590 kubelet[2769]: I1212 18:34:36.553575 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cefcfeda-7bf8-4a99-a05b-8b22828370e2-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-295b9\" (UID: \"cefcfeda-7bf8-4a99-a05b-8b22828370e2\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-295b9"
Dec 12 18:34:36.553590 kubelet[2769]: I1212 18:34:36.553601 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40973630-69db-4afe-9fed-5bd630e109f3-kube-proxy\") pod \"kube-proxy-k59k4\" (UID: \"40973630-69db-4afe-9fed-5bd630e109f3\") " pod="kube-system/kube-proxy-k59k4"
Dec 12 18:34:36.553844 kubelet[2769]: I1212 18:34:36.553650 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40973630-69db-4afe-9fed-5bd630e109f3-xtables-lock\") pod \"kube-proxy-k59k4\" (UID: \"40973630-69db-4afe-9fed-5bd630e109f3\") " pod="kube-system/kube-proxy-k59k4"
Dec 12 18:34:36.553844 kubelet[2769]: I1212 18:34:36.553686 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40973630-69db-4afe-9fed-5bd630e109f3-lib-modules\") pod \"kube-proxy-k59k4\" (UID: \"40973630-69db-4afe-9fed-5bd630e109f3\") " pod="kube-system/kube-proxy-k59k4"
Dec 12 18:34:36.553844 kubelet[2769]: I1212 18:34:36.553713 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgsh9\" (UniqueName: \"kubernetes.io/projected/40973630-69db-4afe-9fed-5bd630e109f3-kube-api-access-mgsh9\") pod \"kube-proxy-k59k4\" (UID: \"40973630-69db-4afe-9fed-5bd630e109f3\") " pod="kube-system/kube-proxy-k59k4"
Dec 12 18:34:36.726452 kubelet[2769]: E1212 18:34:36.726294 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:36.727416 containerd[1553]: time="2025-12-12T18:34:36.727337804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k59k4,Uid:40973630-69db-4afe-9fed-5bd630e109f3,Namespace:kube-system,Attempt:0,}"
Dec 12 18:34:36.731690 containerd[1553]: time="2025-12-12T18:34:36.731636289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-295b9,Uid:cefcfeda-7bf8-4a99-a05b-8b22828370e2,Namespace:tigera-operator,Attempt:0,}"
Dec 12 18:34:36.758478 containerd[1553]: time="2025-12-12T18:34:36.758424193Z" level=info msg="connecting to shim c79cc3c54e04c029b39269d77be241d78b3c873491109835096850bcfbe7494a" address="unix:///run/containerd/s/3c2b0eafd4322b0828a7b697092749936c4afd4c61f15d40d98905d5f589c639" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:34:36.764651 containerd[1553]: time="2025-12-12T18:34:36.764593466Z" level=info msg="connecting to shim 9f3fbc4b215d1e958925fa454faa247b3e2d2eef95f023052065c722cefbc44d" address="unix:///run/containerd/s/79e9c7e8a918e9bddde45e6fe3514de9236f7c897600c04a87068cf872870925" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:34:36.811065 systemd[1]: Started cri-containerd-c79cc3c54e04c029b39269d77be241d78b3c873491109835096850bcfbe7494a.scope - libcontainer container c79cc3c54e04c029b39269d77be241d78b3c873491109835096850bcfbe7494a.
Dec 12 18:34:36.832064 systemd[1]: Started cri-containerd-9f3fbc4b215d1e958925fa454faa247b3e2d2eef95f023052065c722cefbc44d.scope - libcontainer container 9f3fbc4b215d1e958925fa454faa247b3e2d2eef95f023052065c722cefbc44d.
Dec 12 18:34:36.863112 containerd[1553]: time="2025-12-12T18:34:36.863051259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k59k4,Uid:40973630-69db-4afe-9fed-5bd630e109f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c79cc3c54e04c029b39269d77be241d78b3c873491109835096850bcfbe7494a\""
Dec 12 18:34:36.864152 kubelet[2769]: E1212 18:34:36.864117 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:36.873987 containerd[1553]: time="2025-12-12T18:34:36.873563271Z" level=info msg="CreateContainer within sandbox \"c79cc3c54e04c029b39269d77be241d78b3c873491109835096850bcfbe7494a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 12 18:34:36.886374 containerd[1553]: time="2025-12-12T18:34:36.886319222Z" level=info msg="Container 9652c1a8b7d84c3841696737d102495919d206ee454b6dfaec2f96d6aee8a3e0: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:34:36.887101 containerd[1553]: time="2025-12-12T18:34:36.887077218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-295b9,Uid:cefcfeda-7bf8-4a99-a05b-8b22828370e2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9f3fbc4b215d1e958925fa454faa247b3e2d2eef95f023052065c722cefbc44d\""
Dec 12 18:34:36.888770 containerd[1553]: time="2025-12-12T18:34:36.888736629Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Dec 12 18:34:36.898435 containerd[1553]: time="2025-12-12T18:34:36.898389635Z" level=info msg="CreateContainer within sandbox \"c79cc3c54e04c029b39269d77be241d78b3c873491109835096850bcfbe7494a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9652c1a8b7d84c3841696737d102495919d206ee454b6dfaec2f96d6aee8a3e0\""
Dec 12 18:34:36.899134 containerd[1553]: time="2025-12-12T18:34:36.898976909Z" level=info msg="StartContainer for \"9652c1a8b7d84c3841696737d102495919d206ee454b6dfaec2f96d6aee8a3e0\""
Dec 12 18:34:36.900371 containerd[1553]: time="2025-12-12T18:34:36.900330415Z" level=info msg="connecting to shim 9652c1a8b7d84c3841696737d102495919d206ee454b6dfaec2f96d6aee8a3e0" address="unix:///run/containerd/s/3c2b0eafd4322b0828a7b697092749936c4afd4c61f15d40d98905d5f589c639" protocol=ttrpc version=3
Dec 12 18:34:36.922067 systemd[1]: Started cri-containerd-9652c1a8b7d84c3841696737d102495919d206ee454b6dfaec2f96d6aee8a3e0.scope - libcontainer container 9652c1a8b7d84c3841696737d102495919d206ee454b6dfaec2f96d6aee8a3e0.
Dec 12 18:34:37.121765 containerd[1553]: time="2025-12-12T18:34:37.121726822Z" level=info msg="StartContainer for \"9652c1a8b7d84c3841696737d102495919d206ee454b6dfaec2f96d6aee8a3e0\" returns successfully"
Dec 12 18:34:37.301660 kubelet[2769]: E1212 18:34:37.301627 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:37.301660 kubelet[2769]: E1212 18:34:37.301663 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:37.432165 kubelet[2769]: I1212 18:34:37.432026 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k59k4" podStartSLOduration=2.432007205 podStartE2EDuration="2.432007205s" podCreationTimestamp="2025-12-12 18:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:34:37.431201419 +0000 UTC m=+8.274508554" watchObservedRunningTime="2025-12-12 18:34:37.432007205 +0000 UTC m=+8.275314340"
Dec 12 18:34:38.289672 kubelet[2769]: E1212 18:34:38.289623 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:38.303900 kubelet[2769]: E1212 18:34:38.303854 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:39.396703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201994657.mount: Deactivated successfully.
Dec 12 18:34:40.390306 containerd[1553]: time="2025-12-12T18:34:40.390214550Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:34:40.411159 containerd[1553]: time="2025-12-12T18:34:40.411059638Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Dec 12 18:34:40.413329 containerd[1553]: time="2025-12-12T18:34:40.413290531Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:34:40.450617 containerd[1553]: time="2025-12-12T18:34:40.450499546Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:34:40.451320 containerd[1553]: time="2025-12-12T18:34:40.451238866Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.562461019s"
Dec 12 18:34:40.451320 containerd[1553]: time="2025-12-12T18:34:40.451300782Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Dec 12 18:34:40.465180 containerd[1553]: time="2025-12-12T18:34:40.465129323Z" level=info msg="CreateContainer within sandbox \"9f3fbc4b215d1e958925fa454faa247b3e2d2eef95f023052065c722cefbc44d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 12 18:34:40.508787 containerd[1553]: time="2025-12-12T18:34:40.508705424Z" level=info msg="Container cbc61c90e6c00c1e5a08cd281ef60cdedbf26cbb58974322b99181aa2f340759: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:34:40.525945 containerd[1553]: time="2025-12-12T18:34:40.525846392Z" level=info msg="CreateContainer within sandbox \"9f3fbc4b215d1e958925fa454faa247b3e2d2eef95f023052065c722cefbc44d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cbc61c90e6c00c1e5a08cd281ef60cdedbf26cbb58974322b99181aa2f340759\""
Dec 12 18:34:40.526620 containerd[1553]: time="2025-12-12T18:34:40.526564691Z" level=info msg="StartContainer for \"cbc61c90e6c00c1e5a08cd281ef60cdedbf26cbb58974322b99181aa2f340759\""
Dec 12 18:34:40.527815 containerd[1553]: time="2025-12-12T18:34:40.527780997Z" level=info msg="connecting to shim cbc61c90e6c00c1e5a08cd281ef60cdedbf26cbb58974322b99181aa2f340759" address="unix:///run/containerd/s/79e9c7e8a918e9bddde45e6fe3514de9236f7c897600c04a87068cf872870925" protocol=ttrpc version=3
Dec 12 18:34:40.577063 systemd[1]: Started cri-containerd-cbc61c90e6c00c1e5a08cd281ef60cdedbf26cbb58974322b99181aa2f340759.scope - libcontainer container cbc61c90e6c00c1e5a08cd281ef60cdedbf26cbb58974322b99181aa2f340759.
Dec 12 18:34:40.722313 containerd[1553]: time="2025-12-12T18:34:40.722120740Z" level=info msg="StartContainer for \"cbc61c90e6c00c1e5a08cd281ef60cdedbf26cbb58974322b99181aa2f340759\" returns successfully"
Dec 12 18:34:41.323886 kubelet[2769]: I1212 18:34:41.323797 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-295b9" podStartSLOduration=2.75940195 podStartE2EDuration="6.323774492s" podCreationTimestamp="2025-12-12 18:34:35 +0000 UTC" firstStartedPulling="2025-12-12 18:34:36.888264331 +0000 UTC m=+7.731571466" lastFinishedPulling="2025-12-12 18:34:40.452636873 +0000 UTC m=+11.295944008" observedRunningTime="2025-12-12 18:34:41.323655779 +0000 UTC m=+12.166962915" watchObservedRunningTime="2025-12-12 18:34:41.323774492 +0000 UTC m=+12.167081627"
Dec 12 18:34:46.567684 sudo[1793]: pam_unix(sudo:session): session closed for user root
Dec 12 18:34:46.569765 sshd[1785]: Connection closed by 10.0.0.1 port 60014
Dec 12 18:34:46.606207 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Dec 12 18:34:46.611502 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:60014.service: Deactivated successfully.
Dec 12 18:34:46.614034 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 18:34:46.614272 systemd[1]: session-9.scope: Consumed 6.699s CPU time, 230M memory peak.
Dec 12 18:34:46.615575 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit.
Dec 12 18:34:46.617232 systemd-logind[1540]: Removed session 9.
Dec 12 18:34:51.312958 systemd[1]: Created slice kubepods-besteffort-pod2136728d_4359_4fb7_8cd4_f5111de3932f.slice - libcontainer container kubepods-besteffort-pod2136728d_4359_4fb7_8cd4_f5111de3932f.slice.
Dec 12 18:34:51.450954 kubelet[2769]: I1212 18:34:51.450800 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2136728d-4359-4fb7-8cd4-f5111de3932f-typha-certs\") pod \"calico-typha-57d744f5-p7xx4\" (UID: \"2136728d-4359-4fb7-8cd4-f5111de3932f\") " pod="calico-system/calico-typha-57d744f5-p7xx4"
Dec 12 18:34:51.450954 kubelet[2769]: I1212 18:34:51.450859 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbbf\" (UniqueName: \"kubernetes.io/projected/2136728d-4359-4fb7-8cd4-f5111de3932f-kube-api-access-rfbbf\") pod \"calico-typha-57d744f5-p7xx4\" (UID: \"2136728d-4359-4fb7-8cd4-f5111de3932f\") " pod="calico-system/calico-typha-57d744f5-p7xx4"
Dec 12 18:34:51.450954 kubelet[2769]: I1212 18:34:51.450886 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2136728d-4359-4fb7-8cd4-f5111de3932f-tigera-ca-bundle\") pod \"calico-typha-57d744f5-p7xx4\" (UID: \"2136728d-4359-4fb7-8cd4-f5111de3932f\") " pod="calico-system/calico-typha-57d744f5-p7xx4"
Dec 12 18:34:51.494335 systemd[1]: Created slice kubepods-besteffort-podd0a9ae74_bc0f_4813_aa50_8eefc725801f.slice - libcontainer container kubepods-besteffort-podd0a9ae74_bc0f_4813_aa50_8eefc725801f.slice.
Dec 12 18:34:51.551478 kubelet[2769]: I1212 18:34:51.551423 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d0a9ae74-bc0f-4813-aa50-8eefc725801f-cni-log-dir\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.551478 kubelet[2769]: I1212 18:34:51.551472 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d0a9ae74-bc0f-4813-aa50-8eefc725801f-cni-bin-dir\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.628448 kubelet[2769]: E1212 18:34:51.628396 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:51.629116 containerd[1553]: time="2025-12-12T18:34:51.629079141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57d744f5-p7xx4,Uid:2136728d-4359-4fb7-8cd4-f5111de3932f,Namespace:calico-system,Attempt:0,}"
Dec 12 18:34:51.652731 kubelet[2769]: I1212 18:34:51.652667 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0a9ae74-bc0f-4813-aa50-8eefc725801f-var-lib-calico\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.652731 kubelet[2769]: I1212 18:34:51.652718 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0a9ae74-bc0f-4813-aa50-8eefc725801f-xtables-lock\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.652731 kubelet[2769]: I1212 18:34:51.652734 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d0a9ae74-bc0f-4813-aa50-8eefc725801f-var-run-calico\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.653007 kubelet[2769]: I1212 18:34:51.652749 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d0a9ae74-bc0f-4813-aa50-8eefc725801f-policysync\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.653007 kubelet[2769]: I1212 18:34:51.652769 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a9ae74-bc0f-4813-aa50-8eefc725801f-tigera-ca-bundle\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.653007 kubelet[2769]: I1212 18:34:51.652794 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d0a9ae74-bc0f-4813-aa50-8eefc725801f-flexvol-driver-host\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.653007 kubelet[2769]: I1212 18:34:51.652815 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d0a9ae74-bc0f-4813-aa50-8eefc725801f-cni-net-dir\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.653007 kubelet[2769]: I1212 18:34:51.652827 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d0a9ae74-bc0f-4813-aa50-8eefc725801f-node-certs\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.653229 kubelet[2769]: I1212 18:34:51.652840 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0a9ae74-bc0f-4813-aa50-8eefc725801f-lib-modules\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.653229 kubelet[2769]: I1212 18:34:51.652852 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5gb\" (UniqueName: \"kubernetes.io/projected/d0a9ae74-bc0f-4813-aa50-8eefc725801f-kube-api-access-rs5gb\") pod \"calico-node-gbz6l\" (UID: \"d0a9ae74-bc0f-4813-aa50-8eefc725801f\") " pod="calico-system/calico-node-gbz6l"
Dec 12 18:34:51.682848 kubelet[2769]: E1212 18:34:51.682759 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861"
Dec 12 18:34:51.683831 containerd[1553]: time="2025-12-12T18:34:51.683788702Z" level=info msg="connecting to shim 8b704018edc5494daaa3cda14056e9c7040f610eb55cc5281a7bb197cfd29711" address="unix:///run/containerd/s/857c346dbf3a88b643625743d72a5466a63e4e942c2877f6c5d981e09ef6aacc" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:34:51.727216 systemd[1]: Started cri-containerd-8b704018edc5494daaa3cda14056e9c7040f610eb55cc5281a7bb197cfd29711.scope - libcontainer container 8b704018edc5494daaa3cda14056e9c7040f610eb55cc5281a7bb197cfd29711.
Dec 12 18:34:51.754534 kubelet[2769]: I1212 18:34:51.753991 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3590ca52-1c12-4793-a003-8621a1fe8861-socket-dir\") pod \"csi-node-driver-dtnq5\" (UID: \"3590ca52-1c12-4793-a003-8621a1fe8861\") " pod="calico-system/csi-node-driver-dtnq5"
Dec 12 18:34:51.754534 kubelet[2769]: I1212 18:34:51.754062 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthr7\" (UniqueName: \"kubernetes.io/projected/3590ca52-1c12-4793-a003-8621a1fe8861-kube-api-access-vthr7\") pod \"csi-node-driver-dtnq5\" (UID: \"3590ca52-1c12-4793-a003-8621a1fe8861\") " pod="calico-system/csi-node-driver-dtnq5"
Dec 12 18:34:51.754534 kubelet[2769]: I1212 18:34:51.754107 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3590ca52-1c12-4793-a003-8621a1fe8861-registration-dir\") pod \"csi-node-driver-dtnq5\" (UID: \"3590ca52-1c12-4793-a003-8621a1fe8861\") " pod="calico-system/csi-node-driver-dtnq5"
Dec 12 18:34:51.754786 kubelet[2769]: I1212 18:34:51.754147 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3590ca52-1c12-4793-a003-8621a1fe8861-varrun\") pod \"csi-node-driver-dtnq5\" (UID: \"3590ca52-1c12-4793-a003-8621a1fe8861\") " pod="calico-system/csi-node-driver-dtnq5"
Dec 12 18:34:51.754856 kubelet[2769]: I1212 18:34:51.754820 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3590ca52-1c12-4793-a003-8621a1fe8861-kubelet-dir\") pod \"csi-node-driver-dtnq5\" (UID: \"3590ca52-1c12-4793-a003-8621a1fe8861\") " pod="calico-system/csi-node-driver-dtnq5"
Dec 12 18:34:51.757803 kubelet[2769]: E1212 18:34:51.757783 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.757895 kubelet[2769]: W1212 18:34:51.757879 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.758025 kubelet[2769]: E1212 18:34:51.758009 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.762291 kubelet[2769]: E1212 18:34:51.760334 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.762291 kubelet[2769]: W1212 18:34:51.760353 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.762291 kubelet[2769]: E1212 18:34:51.760379 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.778422 kubelet[2769]: E1212 18:34:51.778385 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.778422 kubelet[2769]: W1212 18:34:51.778406 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.778422 kubelet[2769]: E1212 18:34:51.778428 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.802685 kubelet[2769]: E1212 18:34:51.802637 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:51.804945 containerd[1553]: time="2025-12-12T18:34:51.804875168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbz6l,Uid:d0a9ae74-bc0f-4813-aa50-8eefc725801f,Namespace:calico-system,Attempt:0,}"
Dec 12 18:34:51.816759 containerd[1553]: time="2025-12-12T18:34:51.816691660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57d744f5-p7xx4,Uid:2136728d-4359-4fb7-8cd4-f5111de3932f,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b704018edc5494daaa3cda14056e9c7040f610eb55cc5281a7bb197cfd29711\""
Dec 12 18:34:51.818940 kubelet[2769]: E1212 18:34:51.817896 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:51.819141 containerd[1553]: time="2025-12-12T18:34:51.819044256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 18:34:51.842052 containerd[1553]: time="2025-12-12T18:34:51.841876718Z" level=info msg="connecting to shim d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286" address="unix:///run/containerd/s/494d55a182bc9b7da9de74d945116fc893abd55198c45c82c18d5c1f097feb90" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:34:51.857038 kubelet[2769]: E1212 18:34:51.856784 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.857038 kubelet[2769]: W1212 18:34:51.856808 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.857038 kubelet[2769]: E1212 18:34:51.856834 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.857553 kubelet[2769]: E1212 18:34:51.857387 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.857553 kubelet[2769]: W1212 18:34:51.857400 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.857553 kubelet[2769]: E1212 18:34:51.857409 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.857799 kubelet[2769]: E1212 18:34:51.857679 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.857799 kubelet[2769]: W1212 18:34:51.857690 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.857799 kubelet[2769]: E1212 18:34:51.857699 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.858119 kubelet[2769]: E1212 18:34:51.857970 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.858119 kubelet[2769]: W1212 18:34:51.857982 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.858119 kubelet[2769]: E1212 18:34:51.857991 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.858477 kubelet[2769]: E1212 18:34:51.858278 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.858477 kubelet[2769]: W1212 18:34:51.858307 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.858477 kubelet[2769]: E1212 18:34:51.858318 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.858687 kubelet[2769]: E1212 18:34:51.858616 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.858687 kubelet[2769]: W1212 18:34:51.858627 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.858687 kubelet[2769]: E1212 18:34:51.858636 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.859036 kubelet[2769]: E1212 18:34:51.859000 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.859036 kubelet[2769]: W1212 18:34:51.859012 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.859036 kubelet[2769]: E1212 18:34:51.859022 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.859389 kubelet[2769]: E1212 18:34:51.859357 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.859389 kubelet[2769]: W1212 18:34:51.859367 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.859389 kubelet[2769]: E1212 18:34:51.859376 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.859744 kubelet[2769]: E1212 18:34:51.859712 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.859744 kubelet[2769]: W1212 18:34:51.859722 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.859744 kubelet[2769]: E1212 18:34:51.859732 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.860147 kubelet[2769]: E1212 18:34:51.860098 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.860147 kubelet[2769]: W1212 18:34:51.860109 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.860147 kubelet[2769]: E1212 18:34:51.860118 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.860988 kubelet[2769]: E1212 18:34:51.860832 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.860988 kubelet[2769]: W1212 18:34:51.860843 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.860988 kubelet[2769]: E1212 18:34:51.860852 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.861234 kubelet[2769]: E1212 18:34:51.861142 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.861234 kubelet[2769]: W1212 18:34:51.861176 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.861234 kubelet[2769]: E1212 18:34:51.861206 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.861531 kubelet[2769]: E1212 18:34:51.861511 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.861531 kubelet[2769]: W1212 18:34:51.861529 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.861602 kubelet[2769]: E1212 18:34:51.861540 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.861848 kubelet[2769]: E1212 18:34:51.861823 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.861848 kubelet[2769]: W1212 18:34:51.861839 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.861848 kubelet[2769]: E1212 18:34:51.861848 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.862229 kubelet[2769]: E1212 18:34:51.862115 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.862229 kubelet[2769]: W1212 18:34:51.862128 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.862229 kubelet[2769]: E1212 18:34:51.862138 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.862380 kubelet[2769]: E1212 18:34:51.862359 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.862380 kubelet[2769]: W1212 18:34:51.862373 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.862437 kubelet[2769]: E1212 18:34:51.862383 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.862676 kubelet[2769]: E1212 18:34:51.862643 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.862676 kubelet[2769]: W1212 18:34:51.862658 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.862676 kubelet[2769]: E1212 18:34:51.862668 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.863698 kubelet[2769]: E1212 18:34:51.863014 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.863698 kubelet[2769]: W1212 18:34:51.863033 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.863698 kubelet[2769]: E1212 18:34:51.863045 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.864179 kubelet[2769]: E1212 18:34:51.864009 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.864179 kubelet[2769]: W1212 18:34:51.864032 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.864179 kubelet[2769]: E1212 18:34:51.864043 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.864619 kubelet[2769]: E1212 18:34:51.864451 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.864619 kubelet[2769]: W1212 18:34:51.864459 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.864619 kubelet[2769]: E1212 18:34:51.864468 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.864703 kubelet[2769]: E1212 18:34:51.864674 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.864703 kubelet[2769]: W1212 18:34:51.864681 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.864703 kubelet[2769]: E1212 18:34:51.864689 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.865826 kubelet[2769]: E1212 18:34:51.864938 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.865826 kubelet[2769]: W1212 18:34:51.864966 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.865826 kubelet[2769]: E1212 18:34:51.864975 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.865826 kubelet[2769]: E1212 18:34:51.865268 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.865826 kubelet[2769]: W1212 18:34:51.865277 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.865826 kubelet[2769]: E1212 18:34:51.865285 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.865826 kubelet[2769]: E1212 18:34:51.865523 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.865826 kubelet[2769]: W1212 18:34:51.865533 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.865826 kubelet[2769]: E1212 18:34:51.865541 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.865826 kubelet[2769]: E1212 18:34:51.865757 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.866409 kubelet[2769]: W1212 18:34:51.865764 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.866409 kubelet[2769]: E1212 18:34:51.865772 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.871177 systemd[1]: Started cri-containerd-d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286.scope - libcontainer container d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286.
Dec 12 18:34:51.873515 kubelet[2769]: E1212 18:34:51.873493 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:51.873515 kubelet[2769]: W1212 18:34:51.873512 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:51.873621 kubelet[2769]: E1212 18:34:51.873535 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:34:51.903504 containerd[1553]: time="2025-12-12T18:34:51.903335580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gbz6l,Uid:d0a9ae74-bc0f-4813-aa50-8eefc725801f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286\""
Dec 12 18:34:51.904694 kubelet[2769]: E1212 18:34:51.904668 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:53.266277 kubelet[2769]: E1212 18:34:53.266197 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861"
Dec 12 18:34:53.689656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183267192.mount: Deactivated successfully.
Dec 12 18:34:54.411149 containerd[1553]: time="2025-12-12T18:34:54.411066997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:34:54.413847 containerd[1553]: time="2025-12-12T18:34:54.413817920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Dec 12 18:34:54.415824 containerd[1553]: time="2025-12-12T18:34:54.415791483Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:34:54.418534 containerd[1553]: time="2025-12-12T18:34:54.418459321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:34:54.418936 containerd[1553]: time="2025-12-12T18:34:54.418880551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.599733152s"
Dec 12 18:34:54.418985 containerd[1553]: time="2025-12-12T18:34:54.418947106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Dec 12 18:34:54.419988 containerd[1553]: time="2025-12-12T18:34:54.419958905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 18:34:54.437117 containerd[1553]: time="2025-12-12T18:34:54.437063998Z" level=info msg="CreateContainer within sandbox \"8b704018edc5494daaa3cda14056e9c7040f610eb55cc5281a7bb197cfd29711\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 18:34:54.453688 containerd[1553]: time="2025-12-12T18:34:54.453409676Z" level=info msg="Container f19b86cf27adffe749dba1625816512b6545eb78f81f89340bb803c025c27106: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:34:54.464038 containerd[1553]: time="2025-12-12T18:34:54.463983502Z" level=info msg="CreateContainer within sandbox \"8b704018edc5494daaa3cda14056e9c7040f610eb55cc5281a7bb197cfd29711\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f19b86cf27adffe749dba1625816512b6545eb78f81f89340bb803c025c27106\""
Dec 12 18:34:54.464567 containerd[1553]: time="2025-12-12T18:34:54.464534917Z" level=info msg="StartContainer for \"f19b86cf27adffe749dba1625816512b6545eb78f81f89340bb803c025c27106\""
Dec 12 18:34:54.465688 containerd[1553]: time="2025-12-12T18:34:54.465658185Z" level=info msg="connecting to shim f19b86cf27adffe749dba1625816512b6545eb78f81f89340bb803c025c27106" address="unix:///run/containerd/s/857c346dbf3a88b643625743d72a5466a63e4e942c2877f6c5d981e09ef6aacc" protocol=ttrpc version=3
Dec 12 18:34:54.490101 systemd[1]: Started cri-containerd-f19b86cf27adffe749dba1625816512b6545eb78f81f89340bb803c025c27106.scope - libcontainer container f19b86cf27adffe749dba1625816512b6545eb78f81f89340bb803c025c27106.
Dec 12 18:34:54.550172 containerd[1553]: time="2025-12-12T18:34:54.550082775Z" level=info msg="StartContainer for \"f19b86cf27adffe749dba1625816512b6545eb78f81f89340bb803c025c27106\" returns successfully"
Dec 12 18:34:55.266435 kubelet[2769]: E1212 18:34:55.266337 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861"
Dec 12 18:34:55.349441 kubelet[2769]: E1212 18:34:55.349406 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:34:55.369710 kubelet[2769]: I1212 18:34:55.369641 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57d744f5-p7xx4" podStartSLOduration=1.7686069789999999 podStartE2EDuration="4.369624651s" podCreationTimestamp="2025-12-12 18:34:51 +0000 UTC" firstStartedPulling="2025-12-12 18:34:51.818645528 +0000 UTC m=+22.661952663" lastFinishedPulling="2025-12-12 18:34:54.4196632 +0000 UTC m=+25.262970335" observedRunningTime="2025-12-12 18:34:55.368762484 +0000 UTC m=+26.212069609" watchObservedRunningTime="2025-12-12 18:34:55.369624651 +0000 UTC m=+26.212931786"
Dec 12 18:34:55.375211 kubelet[2769]: E1212 18:34:55.375192 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:34:55.375211 kubelet[2769]: W1212 18:34:55.375207 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:34:55.375296 kubelet[2769]: E1212 18:34:55.375226 2769 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 12 18:34:55.814765 containerd[1553]: time="2025-12-12T18:34:55.814696363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:55.815697 containerd[1553]: time="2025-12-12T18:34:55.815643600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 12 18:34:55.817014 containerd[1553]: time="2025-12-12T18:34:55.816977554Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:55.819018 containerd[1553]: time="2025-12-12T18:34:55.818977167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:34:55.819571 containerd[1553]: time="2025-12-12T18:34:55.819513133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.39952375s" Dec 12 18:34:55.819571 containerd[1553]: time="2025-12-12T18:34:55.819566503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:34:55.825904 containerd[1553]: time="2025-12-12T18:34:55.825830929Z" level=info msg="CreateContainer within sandbox \"d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:34:55.839524 containerd[1553]: time="2025-12-12T18:34:55.839446181Z" level=info msg="Container bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503: 
CDI devices from CRI Config.CDIDevices: []" Dec 12 18:34:55.854222 containerd[1553]: time="2025-12-12T18:34:55.854161176Z" level=info msg="CreateContainer within sandbox \"d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503\"" Dec 12 18:34:55.854802 containerd[1553]: time="2025-12-12T18:34:55.854775259Z" level=info msg="StartContainer for \"bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503\"" Dec 12 18:34:55.856450 containerd[1553]: time="2025-12-12T18:34:55.856420827Z" level=info msg="connecting to shim bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503" address="unix:///run/containerd/s/494d55a182bc9b7da9de74d945116fc893abd55198c45c82c18d5c1f097feb90" protocol=ttrpc version=3 Dec 12 18:34:55.879462 systemd[1]: Started cri-containerd-bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503.scope - libcontainer container bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503. Dec 12 18:34:55.991778 systemd[1]: cri-containerd-bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503.scope: Deactivated successfully. Dec 12 18:34:55.992151 systemd[1]: cri-containerd-bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503.scope: Consumed 49ms CPU time, 6.3M memory peak, 4.6M written to disk. Dec 12 18:34:56.046102 containerd[1553]: time="2025-12-12T18:34:56.046009096Z" level=info msg="received container exit event container_id:\"bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503\" id:\"bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503\" pid:3424 exited_at:{seconds:1765564495 nanos:994594499}" Dec 12 18:34:56.047543 containerd[1553]: time="2025-12-12T18:34:56.047503209Z" level=info msg="StartContainer for \"bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503\" returns successfully" Dec 12 18:34:56.074965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdd3b7ba644bf64da17cec2f568bc87f2519224cea94f4ec5251610d6d891503-rootfs.mount: Deactivated successfully. 
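
The driver-call.go/plugins.go triplets repeated through this window all describe one condition: on each plugin probe the kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet, stdout comes back empty, and unmarshalling "" as JSON fails with "unexpected end of JSON input". A FlexVolume driver is expected to answer init with a JSON status object on stdout. A minimal sketch of a driver that would satisfy the probe (illustrative only, not the actual nodeagent~uds driver; the field names follow the published FlexVolume convention):

```go
// Illustrative FlexVolume driver: answers the kubelet's "init" probe
// with the JSON status object driver-call.go expects on stdout.
// Not the actual nodeagent~uds driver; a real driver also implements
// mount/unmount (or reports them as not supported).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the fields the kubelet unmarshals from stdout.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	var st driverStatus
	switch os.Args[1] {
	case "init":
		// An empty stdout here is exactly what yields
		// "unexpected end of JSON input" in the log above.
		st = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		st = driverStatus{Status: "Not supported", Message: os.Args[1] + " not implemented"}
	}
	out, err := json.Marshal(st)
	if err != nil {
		os.Exit(1)
	}
	fmt.Println(string(out))
}
```

The flexvol-driver init container started above (image pod2daemon-flexvol) is what normally installs Calico's driver at that nodeagent~uds path, which is why the probe errors stop being interesting once it has run.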
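The pod_startup_latency_tracker entry for calico-typha-57d744f5-p7xx4 earlier in this window is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (firstStartedPulling to lastFinishedPulling). Recomputing from the logged timestamps:

```go
// Recompute the calico-typha startup durations from the timestamps in
// the pod_startup_latency_tracker.go entry above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the "2025-12-12 18:34:51 +0000 UTC" form in the
	// log; the fractional seconds are optional.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-12 18:34:51 +0000 UTC")
	firstPull := mustParse("2025-12-12 18:34:51.818645528 +0000 UTC")
	lastPull := mustParse("2025-12-12 18:34:54.4196632 +0000 UTC")
	running := mustParse("2025-12-12 18:34:55.369624651 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window
	fmt.Println(e2e, slo)                // 4.369624651s 1.768606979s
}
```

Both results match the logged values (podStartE2EDuration=4.369624651s, podStartSLOduration≈1.768606979s), confirming the SLO figure excludes time spent pulling images.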
Dec 12 18:34:56.353659 kubelet[2769]: I1212 18:34:56.353521 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:34:56.354235 kubelet[2769]: E1212 18:34:56.353955 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:56.354355 kubelet[2769]: E1212 18:34:56.354315 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:57.266739 kubelet[2769]: E1212 18:34:57.266318 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861" Dec 12 18:34:57.357462 kubelet[2769]: E1212 18:34:57.357420 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:34:57.359143 containerd[1553]: time="2025-12-12T18:34:57.359092228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:34:59.266333 kubelet[2769]: E1212 18:34:59.266269 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861" Dec 12 18:34:59.794894 kubelet[2769]: I1212 18:34:59.794828 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:34:59.795352 kubelet[2769]: E1212 18:34:59.795326 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:00.362027 kubelet[2769]: E1212 18:35:00.361981 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:01.266537 kubelet[2769]: E1212 18:35:01.266438 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861" Dec 12 18:35:01.303836 containerd[1553]: time="2025-12-12T18:35:01.303758978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:01.304806 containerd[1553]: time="2025-12-12T18:35:01.304767572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 12 18:35:01.306059 containerd[1553]: time="2025-12-12T18:35:01.306022606Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:01.308957 containerd[1553]: time="2025-12-12T18:35:01.308925783Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:01.309400 containerd[1553]: time="2025-12-12T18:35:01.309373022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.950237903s" Dec 12 18:35:01.309400 containerd[1553]: time="2025-12-12T18:35:01.309398850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:35:01.316159 containerd[1553]: time="2025-12-12T18:35:01.316107317Z" level=info msg="CreateContainer within sandbox \"d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:35:01.327194 containerd[1553]: time="2025-12-12T18:35:01.327144965Z" level=info msg="Container 0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:01.340932 containerd[1553]: time="2025-12-12T18:35:01.340852773Z" level=info msg="CreateContainer within sandbox \"d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c\"" Dec 12 18:35:01.343382 containerd[1553]: time="2025-12-12T18:35:01.341491300Z" level=info msg="StartContainer for \"0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c\"" Dec 12 18:35:01.343382 containerd[1553]: time="2025-12-12T18:35:01.343210125Z" level=info msg="connecting to shim 0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c" address="unix:///run/containerd/s/494d55a182bc9b7da9de74d945116fc893abd55198c45c82c18d5c1f097feb90" protocol=ttrpc version=3 Dec 12 18:35:01.371271 systemd[1]: Started cri-containerd-0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c.scope - libcontainer container 0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c. Dec 12 18:35:01.472368 containerd[1553]: time="2025-12-12T18:35:01.472310169Z" level=info msg="StartContainer for \"0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c\" returns successfully" Dec 12 18:35:02.372737 kubelet[2769]: E1212 18:35:02.372615 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:02.686494 containerd[1553]: time="2025-12-12T18:35:02.686356116Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:35:02.690542 systemd[1]: cri-containerd-0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c.scope: Deactivated successfully. Dec 12 18:35:02.691332 systemd[1]: cri-containerd-0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c.scope: Consumed 698ms CPU time, 182.2M memory peak, 2.7M read from disk, 171.3M written to disk. 
Dec 12 18:35:02.693992 containerd[1553]: time="2025-12-12T18:35:02.693947588Z" level=info msg="received container exit event container_id:\"0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c\" id:\"0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c\" pid:3484 exited_at:{seconds:1765564502 nanos:693568186}" Dec 12 18:35:02.717429 kubelet[2769]: I1212 18:35:02.717393 2769 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 12 18:35:02.728857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b79c43201249c84c345619de3201b0f0f84a7497f048990f919c0fac2e5110c-rootfs.mount: Deactivated successfully. Dec 12 18:35:03.276775 systemd[1]: Created slice kubepods-burstable-podb7aa5f12_60b1_4a4a_b2f4_b7aa5d15059a.slice - libcontainer container kubepods-burstable-podb7aa5f12_60b1_4a4a_b2f4_b7aa5d15059a.slice. Dec 12 18:35:03.295157 systemd[1]: Created slice kubepods-besteffort-pod1f0321c0_7695_4f53_9a29_c3900a354123.slice - libcontainer container kubepods-besteffort-pod1f0321c0_7695_4f53_9a29_c3900a354123.slice. Dec 12 18:35:03.302580 systemd[1]: Created slice kubepods-besteffort-pod3590ca52_1c12_4793_a003_8621a1fe8861.slice - libcontainer container kubepods-besteffort-pod3590ca52_1c12_4793_a003_8621a1fe8861.slice. Dec 12 18:35:03.315527 systemd[1]: Created slice kubepods-besteffort-pod4c88e5b7_6c17_45c7_92f0_9be254ebdd59.slice - libcontainer container kubepods-besteffort-pod4c88e5b7_6c17_45c7_92f0_9be254ebdd59.slice. Dec 12 18:35:03.323169 containerd[1553]: time="2025-12-12T18:35:03.323107938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtnq5,Uid:3590ca52-1c12-4793-a003-8621a1fe8861,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:03.326568 systemd[1]: Created slice kubepods-burstable-podfefe395e_a76c_40a6_a6e5_38c9f3e1ee92.slice - libcontainer container kubepods-burstable-podfefe395e_a76c_40a6_a6e5_38c9f3e1ee92.slice. Dec 12 18:35:03.335249 systemd[1]: Created slice kubepods-besteffort-pod130041d5_7a82_45e2_b4bb_3f947d0d2476.slice - libcontainer container kubepods-besteffort-pod130041d5_7a82_45e2_b4bb_3f947d0d2476.slice. Dec 12 18:35:03.351280 systemd[1]: Created slice kubepods-besteffort-pod924b51e0_ed81_4bc8_a597_a44686b519ff.slice - libcontainer container kubepods-besteffort-pod924b51e0_ed81_4bc8_a597_a44686b519ff.slice. 
Dec 12 18:35:03.373545 kubelet[2769]: I1212 18:35:03.373506 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bplwc\" (UniqueName: \"kubernetes.io/projected/b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a-kube-api-access-bplwc\") pod \"coredns-66bc5c9577-74dh6\" (UID: \"b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a\") " pod="kube-system/coredns-66bc5c9577-74dh6" Dec 12 18:35:03.374187 kubelet[2769]: I1212 18:35:03.373554 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a-config-volume\") pod \"coredns-66bc5c9577-74dh6\" (UID: \"b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a\") " pod="kube-system/coredns-66bc5c9577-74dh6" Dec 12 18:35:03.376936 kubelet[2769]: E1212 18:35:03.376882 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:03.377683 containerd[1553]: time="2025-12-12T18:35:03.377590217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:35:03.474388 kubelet[2769]: I1212 18:35:03.474328 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/924b51e0-ed81-4bc8-a597-a44686b519ff-calico-apiserver-certs\") pod \"calico-apiserver-7b767d98d4-755s8\" (UID: \"924b51e0-ed81-4bc8-a597-a44686b519ff\") " pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" Dec 12 18:35:03.474388 kubelet[2769]: I1212 18:35:03.474381 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl2th\" (UniqueName: \"kubernetes.io/projected/4c88e5b7-6c17-45c7-92f0-9be254ebdd59-kube-api-access-sl2th\") pod \"goldmane-7c778bb748-wcf2b\" (UID: \"4c88e5b7-6c17-45c7-92f0-9be254ebdd59\") " pod="calico-system/goldmane-7c778bb748-wcf2b" Dec 12 18:35:03.474388 kubelet[2769]: I1212 18:35:03.474404 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9pnp\" (UniqueName: \"kubernetes.io/projected/924b51e0-ed81-4bc8-a597-a44686b519ff-kube-api-access-j9pnp\") pod \"calico-apiserver-7b767d98d4-755s8\" (UID: \"924b51e0-ed81-4bc8-a597-a44686b519ff\") " pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" Dec 12 18:35:03.474950 kubelet[2769]: I1212 18:35:03.474454 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/130041d5-7a82-45e2-b4bb-3f947d0d2476-whisker-ca-bundle\") pod \"whisker-84c98976c4-29chl\" (UID: \"130041d5-7a82-45e2-b4bb-3f947d0d2476\") " pod="calico-system/whisker-84c98976c4-29chl" Dec 12 18:35:03.474950 kubelet[2769]: I1212 18:35:03.474473 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4c88e5b7-6c17-45c7-92f0-9be254ebdd59-goldmane-key-pair\") pod \"goldmane-7c778bb748-wcf2b\" (UID: \"4c88e5b7-6c17-45c7-92f0-9be254ebdd59\") " pod="calico-system/goldmane-7c778bb748-wcf2b" Dec 12 18:35:03.474950 kubelet[2769]: I1212 18:35:03.474507 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64rv\" (UniqueName: 
\"kubernetes.io/projected/130041d5-7a82-45e2-b4bb-3f947d0d2476-kube-api-access-t64rv\") pod \"whisker-84c98976c4-29chl\" (UID: \"130041d5-7a82-45e2-b4bb-3f947d0d2476\") " pod="calico-system/whisker-84c98976c4-29chl" Dec 12 18:35:03.474950 kubelet[2769]: I1212 18:35:03.474527 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4796\" (UniqueName: \"kubernetes.io/projected/1f0321c0-7695-4f53-9a29-c3900a354123-kube-api-access-z4796\") pod \"calico-apiserver-7b767d98d4-5tzst\" (UID: \"1f0321c0-7695-4f53-9a29-c3900a354123\") " pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" Dec 12 18:35:03.474950 kubelet[2769]: I1212 18:35:03.474702 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vp9\" (UniqueName: \"kubernetes.io/projected/fefe395e-a76c-40a6-a6e5-38c9f3e1ee92-kube-api-access-v5vp9\") pod \"coredns-66bc5c9577-nscm8\" (UID: \"fefe395e-a76c-40a6-a6e5-38c9f3e1ee92\") " pod="kube-system/coredns-66bc5c9577-nscm8" Dec 12 18:35:03.475124 kubelet[2769]: I1212 18:35:03.474793 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c88e5b7-6c17-45c7-92f0-9be254ebdd59-config\") pod \"goldmane-7c778bb748-wcf2b\" (UID: \"4c88e5b7-6c17-45c7-92f0-9be254ebdd59\") " pod="calico-system/goldmane-7c778bb748-wcf2b" Dec 12 18:35:03.475813 kubelet[2769]: I1212 18:35:03.475189 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/130041d5-7a82-45e2-b4bb-3f947d0d2476-whisker-backend-key-pair\") pod \"whisker-84c98976c4-29chl\" (UID: \"130041d5-7a82-45e2-b4bb-3f947d0d2476\") " pod="calico-system/whisker-84c98976c4-29chl" Dec 12 18:35:03.475813 kubelet[2769]: I1212 18:35:03.475241 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fefe395e-a76c-40a6-a6e5-38c9f3e1ee92-config-volume\") pod \"coredns-66bc5c9577-nscm8\" (UID: \"fefe395e-a76c-40a6-a6e5-38c9f3e1ee92\") " pod="kube-system/coredns-66bc5c9577-nscm8" Dec 12 18:35:03.475813 kubelet[2769]: I1212 18:35:03.475267 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c88e5b7-6c17-45c7-92f0-9be254ebdd59-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-wcf2b\" (UID: \"4c88e5b7-6c17-45c7-92f0-9be254ebdd59\") " pod="calico-system/goldmane-7c778bb748-wcf2b" Dec 12 18:35:03.475813 kubelet[2769]: I1212 18:35:03.475287 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1f0321c0-7695-4f53-9a29-c3900a354123-calico-apiserver-certs\") pod \"calico-apiserver-7b767d98d4-5tzst\" (UID: \"1f0321c0-7695-4f53-9a29-c3900a354123\") " pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" Dec 12 18:35:03.538739 containerd[1553]: time="2025-12-12T18:35:03.538565771Z" level=error msg="Failed to destroy network for sandbox \"5dfd1e1aca865b8d7d104e167e56b255b68739a6ba31d93fcf9564a078eb74f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.542341 systemd[1]: 
run-netns-cni\x2d9141c41b\x2d287b\x2d224b\x2da0c0\x2d73be06dfb9ee.mount: Deactivated successfully. Dec 12 18:35:03.630956 containerd[1553]: time="2025-12-12T18:35:03.629881893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wcf2b,Uid:4c88e5b7-6c17-45c7-92f0-9be254ebdd59,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:03.632333 containerd[1553]: time="2025-12-12T18:35:03.632245667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtnq5,Uid:3590ca52-1c12-4793-a003-8621a1fe8861,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfd1e1aca865b8d7d104e167e56b255b68739a6ba31d93fcf9564a078eb74f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.632943 kubelet[2769]: E1212 18:35:03.632873 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfd1e1aca865b8d7d104e167e56b255b68739a6ba31d93fcf9564a078eb74f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.633038 kubelet[2769]: E1212 18:35:03.632972 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfd1e1aca865b8d7d104e167e56b255b68739a6ba31d93fcf9564a078eb74f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtnq5" Dec 12 18:35:03.633038 kubelet[2769]: E1212 18:35:03.632996 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfd1e1aca865b8d7d104e167e56b255b68739a6ba31d93fcf9564a078eb74f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtnq5" Dec 12 18:35:03.633170 kubelet[2769]: E1212 18:35:03.633056 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dfd1e1aca865b8d7d104e167e56b255b68739a6ba31d93fcf9564a078eb74f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861" Dec 12 18:35:03.635573 systemd[1]: Created slice kubepods-besteffort-pod5bd7d04f_25d6_4f6d_8d32_675830519b60.slice - libcontainer container kubepods-besteffort-pod5bd7d04f_25d6_4f6d_8d32_675830519b60.slice. 
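
The recurring dns.go:154 "Nameserver limits exceeded" message means the node's resolv.conf lists more nameservers than the three the glibc resolver honors; the kubelet applies only the first three and logs the line it kept, here "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that truncation; the fourth server in the sample input is a guess for illustration, since the log only shows the three that survived:

```go
// Truncate a resolv.conf nameserver list to the three entries the
// glibc resolver honors, mirroring the kubelet's "Nameserver limits
// exceeded" handling. The parser is deliberately simplistic.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the kubelet applies the same cap

func appliedNameservers(resolvConf string) []string {
	var ns []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	// The first three entries are the ones the log shows being applied;
	// the fourth is hypothetical, standing in for whatever was dropped.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(strings.Join(appliedNameservers(conf), " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
}
```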
Dec 12 18:35:03.677289 kubelet[2769]: I1212 18:35:03.677082 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd7d04f-25d6-4f6d-8d32-675830519b60-tigera-ca-bundle\") pod \"calico-kube-controllers-57c994577d-zf2dw\" (UID: \"5bd7d04f-25d6-4f6d-8d32-675830519b60\") " pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" Dec 12 18:35:03.677289 kubelet[2769]: I1212 18:35:03.677221 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n22qc\" (UniqueName: \"kubernetes.io/projected/5bd7d04f-25d6-4f6d-8d32-675830519b60-kube-api-access-n22qc\") pod \"calico-kube-controllers-57c994577d-zf2dw\" (UID: \"5bd7d04f-25d6-4f6d-8d32-675830519b60\") " pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" Dec 12 18:35:03.684570 containerd[1553]: time="2025-12-12T18:35:03.684503213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-755s8,Uid:924b51e0-ed81-4bc8-a597-a44686b519ff,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:35:03.687959 containerd[1553]: time="2025-12-12T18:35:03.687823081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c98976c4-29chl,Uid:130041d5-7a82-45e2-b4bb-3f947d0d2476,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:03.779433 containerd[1553]: time="2025-12-12T18:35:03.779347513Z" level=error msg="Failed to destroy network for sandbox \"17b760c36b783b8631c0d1e8b3569cb0e1ddcd27ec61af6878cfcdff8985403d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.782031 containerd[1553]: time="2025-12-12T18:35:03.781963141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-755s8,Uid:924b51e0-ed81-4bc8-a597-a44686b519ff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b760c36b783b8631c0d1e8b3569cb0e1ddcd27ec61af6878cfcdff8985403d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.782510 kubelet[2769]: E1212 18:35:03.782457 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b760c36b783b8631c0d1e8b3569cb0e1ddcd27ec61af6878cfcdff8985403d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.782626 kubelet[2769]: E1212 18:35:03.782536 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b760c36b783b8631c0d1e8b3569cb0e1ddcd27ec61af6878cfcdff8985403d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" Dec 12 18:35:03.782626 kubelet[2769]: E1212 18:35:03.782573 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"17b760c36b783b8631c0d1e8b3569cb0e1ddcd27ec61af6878cfcdff8985403d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" Dec 12 18:35:03.782722 kubelet[2769]: E1212 18:35:03.782665 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b767d98d4-755s8_calico-apiserver(924b51e0-ed81-4bc8-a597-a44686b519ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b767d98d4-755s8_calico-apiserver(924b51e0-ed81-4bc8-a597-a44686b519ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17b760c36b783b8631c0d1e8b3569cb0e1ddcd27ec61af6878cfcdff8985403d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff" Dec 12 18:35:03.787108 systemd[1]: run-netns-cni\x2d6e5f426d\x2dffe4\x2db99a\x2d4529\x2dde38c7e0a89d.mount: Deactivated successfully. Dec 12 18:35:03.791753 containerd[1553]: time="2025-12-12T18:35:03.791596101Z" level=error msg="Failed to destroy network for sandbox \"b23ed308b32d243f3c4afc611e409d511e9db10640c9ed1bf1815d6c5f3db714\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.795477 systemd[1]: run-netns-cni\x2dbb2bc789\x2de3d6\x2d952f\x2dc3b4\x2d94f5b30b2a36.mount: Deactivated successfully. 
Dec 12 18:35:03.795697 containerd[1553]: time="2025-12-12T18:35:03.795477563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wcf2b,Uid:4c88e5b7-6c17-45c7-92f0-9be254ebdd59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b23ed308b32d243f3c4afc611e409d511e9db10640c9ed1bf1815d6c5f3db714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.796018 kubelet[2769]: E1212 18:35:03.795890 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b23ed308b32d243f3c4afc611e409d511e9db10640c9ed1bf1815d6c5f3db714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.796237 kubelet[2769]: E1212 18:35:03.796108 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b23ed308b32d243f3c4afc611e409d511e9db10640c9ed1bf1815d6c5f3db714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wcf2b" Dec 12 18:35:03.796237 kubelet[2769]: E1212 18:35:03.796137 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b23ed308b32d243f3c4afc611e409d511e9db10640c9ed1bf1815d6c5f3db714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wcf2b" Dec 12 18:35:03.796237 kubelet[2769]: E1212 18:35:03.796297 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-wcf2b_calico-system(4c88e5b7-6c17-45c7-92f0-9be254ebdd59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-wcf2b_calico-system(4c88e5b7-6c17-45c7-92f0-9be254ebdd59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b23ed308b32d243f3c4afc611e409d511e9db10640c9ed1bf1815d6c5f3db714\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59" Dec 12 18:35:03.801873 containerd[1553]: time="2025-12-12T18:35:03.801817788Z" level=error msg="Failed to destroy network for sandbox \"33cf906eb5e8fd27dbd034bc2db91fc5a107a3537964ebcf69d8df2a49ae666f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.803802 containerd[1553]: time="2025-12-12T18:35:03.803734854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c98976c4-29chl,Uid:130041d5-7a82-45e2-b4bb-3f947d0d2476,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"33cf906eb5e8fd27dbd034bc2db91fc5a107a3537964ebcf69d8df2a49ae666f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.804549 kubelet[2769]: E1212 18:35:03.804141 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cf906eb5e8fd27dbd034bc2db91fc5a107a3537964ebcf69d8df2a49ae666f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.804549 kubelet[2769]: E1212 18:35:03.804215 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cf906eb5e8fd27dbd034bc2db91fc5a107a3537964ebcf69d8df2a49ae666f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84c98976c4-29chl" Dec 12 18:35:03.804549 kubelet[2769]: E1212 18:35:03.804236 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cf906eb5e8fd27dbd034bc2db91fc5a107a3537964ebcf69d8df2a49ae666f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84c98976c4-29chl" Dec 12 18:35:03.804696 kubelet[2769]: E1212 18:35:03.804293 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84c98976c4-29chl_calico-system(130041d5-7a82-45e2-b4bb-3f947d0d2476)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84c98976c4-29chl_calico-system(130041d5-7a82-45e2-b4bb-3f947d0d2476)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33cf906eb5e8fd27dbd034bc2db91fc5a107a3537964ebcf69d8df2a49ae666f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84c98976c4-29chl" podUID="130041d5-7a82-45e2-b4bb-3f947d0d2476" Dec 12 18:35:03.807568 systemd[1]: run-netns-cni\x2d3fdc8813\x2d77c0\x2dab79\x2da685\x2de329476dad01.mount: Deactivated successfully. 
Dec 12 18:35:03.888038 kubelet[2769]: E1212 18:35:03.887993 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:03.888720 containerd[1553]: time="2025-12-12T18:35:03.888652442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-74dh6,Uid:b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a,Namespace:kube-system,Attempt:0,}" Dec 12 18:35:03.908836 containerd[1553]: time="2025-12-12T18:35:03.908761907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-5tzst,Uid:1f0321c0-7695-4f53-9a29-c3900a354123,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:35:03.938125 kubelet[2769]: E1212 18:35:03.938066 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:03.940850 containerd[1553]: time="2025-12-12T18:35:03.940726644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nscm8,Uid:fefe395e-a76c-40a6-a6e5-38c9f3e1ee92,Namespace:kube-system,Attempt:0,}" Dec 12 18:35:03.944970 containerd[1553]: time="2025-12-12T18:35:03.944577959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c994577d-zf2dw,Uid:5bd7d04f-25d6-4f6d-8d32-675830519b60,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:03.964099 containerd[1553]: time="2025-12-12T18:35:03.964037175Z" level=error msg="Failed to destroy network for sandbox \"8d1e6b1e68d69209ff2157a6ce8cb5cc518b5c04e03c80e13ab1c80cb86a472d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.965622 containerd[1553]: time="2025-12-12T18:35:03.965542448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-74dh6,Uid:b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d1e6b1e68d69209ff2157a6ce8cb5cc518b5c04e03c80e13ab1c80cb86a472d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.966479 kubelet[2769]: E1212 18:35:03.966041 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d1e6b1e68d69209ff2157a6ce8cb5cc518b5c04e03c80e13ab1c80cb86a472d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.966479 kubelet[2769]: E1212 18:35:03.966114 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d1e6b1e68d69209ff2157a6ce8cb5cc518b5c04e03c80e13ab1c80cb86a472d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-74dh6" Dec 12 18:35:03.966479 kubelet[2769]: E1212 18:35:03.966140 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"8d1e6b1e68d69209ff2157a6ce8cb5cc518b5c04e03c80e13ab1c80cb86a472d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-74dh6" Dec 12 18:35:03.966639 kubelet[2769]: E1212 18:35:03.966210 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-74dh6_kube-system(b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-74dh6_kube-system(b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d1e6b1e68d69209ff2157a6ce8cb5cc518b5c04e03c80e13ab1c80cb86a472d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-74dh6" podUID="b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a" Dec 12 18:35:03.981817 containerd[1553]: time="2025-12-12T18:35:03.981749043Z" level=error msg="Failed to destroy network for sandbox \"106e4c9bd0d9681c21fddd72c17b839b64c7fcb8302a4c6666abcb9d934e6fa8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.983923 containerd[1553]: time="2025-12-12T18:35:03.983795301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-5tzst,Uid:1f0321c0-7695-4f53-9a29-c3900a354123,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"106e4c9bd0d9681c21fddd72c17b839b64c7fcb8302a4c6666abcb9d934e6fa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.984169 kubelet[2769]: E1212 18:35:03.984125 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"106e4c9bd0d9681c21fddd72c17b839b64c7fcb8302a4c6666abcb9d934e6fa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:03.984256 kubelet[2769]: E1212 18:35:03.984196 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"106e4c9bd0d9681c21fddd72c17b839b64c7fcb8302a4c6666abcb9d934e6fa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" Dec 12 18:35:03.984256 kubelet[2769]: E1212 18:35:03.984221 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"106e4c9bd0d9681c21fddd72c17b839b64c7fcb8302a4c6666abcb9d934e6fa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" Dec 12 
18:35:03.984326 kubelet[2769]: E1212 18:35:03.984290 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b767d98d4-5tzst_calico-apiserver(1f0321c0-7695-4f53-9a29-c3900a354123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b767d98d4-5tzst_calico-apiserver(1f0321c0-7695-4f53-9a29-c3900a354123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"106e4c9bd0d9681c21fddd72c17b839b64c7fcb8302a4c6666abcb9d934e6fa8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123" Dec 12 18:35:04.013121 containerd[1553]: time="2025-12-12T18:35:04.013047498Z" level=error msg="Failed to destroy network for sandbox \"288935966c220093956df6ff471abe3cc0057c03dcbf707678339255c46c90a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:04.014823 containerd[1553]: time="2025-12-12T18:35:04.014784297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c994577d-zf2dw,Uid:5bd7d04f-25d6-4f6d-8d32-675830519b60,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"288935966c220093956df6ff471abe3cc0057c03dcbf707678339255c46c90a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:04.015132 kubelet[2769]: E1212 18:35:04.015094 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"288935966c220093956df6ff471abe3cc0057c03dcbf707678339255c46c90a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:04.015192 kubelet[2769]: E1212 18:35:04.015159 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"288935966c220093956df6ff471abe3cc0057c03dcbf707678339255c46c90a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" Dec 12 18:35:04.015192 kubelet[2769]: E1212 18:35:04.015184 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"288935966c220093956df6ff471abe3cc0057c03dcbf707678339255c46c90a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" Dec 12 18:35:04.015287 kubelet[2769]: E1212 18:35:04.015254 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57c994577d-zf2dw_calico-system(5bd7d04f-25d6-4f6d-8d32-675830519b60)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57c994577d-zf2dw_calico-system(5bd7d04f-25d6-4f6d-8d32-675830519b60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"288935966c220093956df6ff471abe3cc0057c03dcbf707678339255c46c90a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60" Dec 12 18:35:04.017830 containerd[1553]: time="2025-12-12T18:35:04.017777863Z" level=error msg="Failed to destroy network for sandbox \"fa311add2d5122976a5e4272e6a3967fc2d27fb656e92db5db8bbe06e77a2464\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:04.019235 containerd[1553]: time="2025-12-12T18:35:04.019180824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nscm8,Uid:fefe395e-a76c-40a6-a6e5-38c9f3e1ee92,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa311add2d5122976a5e4272e6a3967fc2d27fb656e92db5db8bbe06e77a2464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:04.019417 kubelet[2769]: E1212 18:35:04.019385 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa311add2d5122976a5e4272e6a3967fc2d27fb656e92db5db8bbe06e77a2464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:04.019465 kubelet[2769]: E1212 18:35:04.019451 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa311add2d5122976a5e4272e6a3967fc2d27fb656e92db5db8bbe06e77a2464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nscm8" Dec 12 18:35:04.019510 kubelet[2769]: E1212 18:35:04.019474 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa311add2d5122976a5e4272e6a3967fc2d27fb656e92db5db8bbe06e77a2464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nscm8" Dec 12 18:35:04.019560 kubelet[2769]: E1212 18:35:04.019526 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nscm8_kube-system(fefe395e-a76c-40a6-a6e5-38c9f3e1ee92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nscm8_kube-system(fefe395e-a76c-40a6-a6e5-38c9f3e1ee92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa311add2d5122976a5e4272e6a3967fc2d27fb656e92db5db8bbe06e77a2464\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nscm8" podUID="fefe395e-a76c-40a6-a6e5-38c9f3e1ee92" Dec 12 18:35:14.071322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355421720.mount: Deactivated successfully. Dec 12 18:35:15.934457 containerd[1553]: time="2025-12-12T18:35:15.934379990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtnq5,Uid:3590ca52-1c12-4793-a003-8621a1fe8861,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:15.943831 containerd[1553]: time="2025-12-12T18:35:15.943750945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c98976c4-29chl,Uid:130041d5-7a82-45e2-b4bb-3f947d0d2476,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:15.957084 containerd[1553]: time="2025-12-12T18:35:15.957015653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-5tzst,Uid:1f0321c0-7695-4f53-9a29-c3900a354123,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:35:16.045263 containerd[1553]: time="2025-12-12T18:35:16.041794918Z" level=error msg="Failed to destroy network for sandbox \"ea2c73a17927b5ee9f5f294d7d33eacb558322bf1aca2f9e43dd83b00961208b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.045481 containerd[1553]: time="2025-12-12T18:35:16.045420380Z" level=error msg="Failed to destroy network for sandbox \"97d4d0662ff3353b2adc33276124ce1fee87faeee2dc184d8fd6fbb7405efbd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.046676 systemd[1]: run-netns-cni\x2dc9b2588e\x2d3844\x2dcdf4\x2d5204\x2dc4e0fce349c9.mount: Deactivated successfully. Dec 12 18:35:16.050137 systemd[1]: run-netns-cni\x2de80b4e59\x2d77fe\x2d0f51\x2dae36\x2dcf572d32642d.mount: Deactivated successfully. 
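The run-netns-cni\x2d... units being cleaned up here are systemd mount units. Systemd uses "-" as the path separator in unit names, so literal hyphens inside a path component are escaped as \x2d. A toy escape covering just that one rule (the real systemd-escape handles many more characters):

    // unitescape.go: reproduces the \x2d escaping visible in the mount-unit
    // names above. Only the hyphen rule is implemented; this is a sketch, not
    // a reimplementation of systemd-escape.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func escapeComponent(s string) string {
    	return strings.ReplaceAll(s, "-", `\x2d`)
    }

    func main() {
    	netns := "cni-c9b2588e-3844-cdf4-5204-c4e0fce349c9" // from the log above
    	fmt.Printf("run-netns-%s.mount\n", escapeComponent(netns))
    }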
Dec 12 18:35:16.071100 containerd[1553]: time="2025-12-12T18:35:16.071008520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:16.085481 containerd[1553]: time="2025-12-12T18:35:16.085398214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtnq5,Uid:3590ca52-1c12-4793-a003-8621a1fe8861,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2c73a17927b5ee9f5f294d7d33eacb558322bf1aca2f9e43dd83b00961208b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.086187 kubelet[2769]: E1212 18:35:16.086137 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2c73a17927b5ee9f5f294d7d33eacb558322bf1aca2f9e43dd83b00961208b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.086654 kubelet[2769]: E1212 18:35:16.086218 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2c73a17927b5ee9f5f294d7d33eacb558322bf1aca2f9e43dd83b00961208b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtnq5" Dec 12 18:35:16.086654 kubelet[2769]: E1212 18:35:16.086245 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2c73a17927b5ee9f5f294d7d33eacb558322bf1aca2f9e43dd83b00961208b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtnq5" Dec 12 18:35:16.086654 kubelet[2769]: E1212 18:35:16.086330 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea2c73a17927b5ee9f5f294d7d33eacb558322bf1aca2f9e43dd83b00961208b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861" Dec 12 18:35:16.088101 containerd[1553]: time="2025-12-12T18:35:16.087862122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c98976c4-29chl,Uid:130041d5-7a82-45e2-b4bb-3f947d0d2476,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d4d0662ff3353b2adc33276124ce1fee87faeee2dc184d8fd6fbb7405efbd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Dec 12 18:35:16.088340 kubelet[2769]: E1212 18:35:16.088298 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d4d0662ff3353b2adc33276124ce1fee87faeee2dc184d8fd6fbb7405efbd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.088412 kubelet[2769]: E1212 18:35:16.088361 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d4d0662ff3353b2adc33276124ce1fee87faeee2dc184d8fd6fbb7405efbd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84c98976c4-29chl" Dec 12 18:35:16.088412 kubelet[2769]: E1212 18:35:16.088384 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d4d0662ff3353b2adc33276124ce1fee87faeee2dc184d8fd6fbb7405efbd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84c98976c4-29chl" Dec 12 18:35:16.088566 kubelet[2769]: E1212 18:35:16.088450 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84c98976c4-29chl_calico-system(130041d5-7a82-45e2-b4bb-3f947d0d2476)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84c98976c4-29chl_calico-system(130041d5-7a82-45e2-b4bb-3f947d0d2476)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97d4d0662ff3353b2adc33276124ce1fee87faeee2dc184d8fd6fbb7405efbd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84c98976c4-29chl" podUID="130041d5-7a82-45e2-b4bb-3f947d0d2476" Dec 12 18:35:16.091290 containerd[1553]: time="2025-12-12T18:35:16.091221743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:35:16.094087 containerd[1553]: time="2025-12-12T18:35:16.094004144Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:16.105646 containerd[1553]: time="2025-12-12T18:35:16.104430196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:16.105646 containerd[1553]: time="2025-12-12T18:35:16.105283797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.727629529s" Dec 12 18:35:16.105646 containerd[1553]: time="2025-12-12T18:35:16.105314085Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:35:16.160772 containerd[1553]: time="2025-12-12T18:35:16.160685855Z" level=error msg="Failed to destroy network for sandbox \"65b768e5e23a4b173b0dfa9321fde3182fcd9304219b30438860a54f7e72ddf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.213528 containerd[1553]: time="2025-12-12T18:35:16.213370756Z" level=info msg="CreateContainer within sandbox \"d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:35:16.230215 containerd[1553]: time="2025-12-12T18:35:16.230105850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-5tzst,Uid:1f0321c0-7695-4f53-9a29-c3900a354123,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b768e5e23a4b173b0dfa9321fde3182fcd9304219b30438860a54f7e72ddf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.230720 kubelet[2769]: E1212 18:35:16.230632 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b768e5e23a4b173b0dfa9321fde3182fcd9304219b30438860a54f7e72ddf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.231005 kubelet[2769]: E1212 18:35:16.230737 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b768e5e23a4b173b0dfa9321fde3182fcd9304219b30438860a54f7e72ddf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" Dec 12 18:35:16.231005 kubelet[2769]: E1212 18:35:16.230764 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b768e5e23a4b173b0dfa9321fde3182fcd9304219b30438860a54f7e72ddf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" Dec 12 18:35:16.231005 kubelet[2769]: E1212 18:35:16.230843 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b767d98d4-5tzst_calico-apiserver(1f0321c0-7695-4f53-9a29-c3900a354123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b767d98d4-5tzst_calico-apiserver(1f0321c0-7695-4f53-9a29-c3900a354123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65b768e5e23a4b173b0dfa9321fde3182fcd9304219b30438860a54f7e72ddf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123" Dec 12 18:35:16.253249 containerd[1553]: time="2025-12-12T18:35:16.252941507Z" level=info msg="Container 157c457b525b7c7e04059b76094c3b5097dd32a6b6ecd3b5f9e564ec49546042: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:16.424266 containerd[1553]: time="2025-12-12T18:35:16.424201320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c994577d-zf2dw,Uid:5bd7d04f-25d6-4f6d-8d32-675830519b60,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:16.461636 containerd[1553]: time="2025-12-12T18:35:16.461566249Z" level=info msg="CreateContainer within sandbox \"d6a326e737bd6fecd1fad1bd7ca535b7f6d4661a898d6c8d3119855638e54286\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"157c457b525b7c7e04059b76094c3b5097dd32a6b6ecd3b5f9e564ec49546042\"" Dec 12 18:35:16.462327 containerd[1553]: time="2025-12-12T18:35:16.462284791Z" level=info msg="StartContainer for \"157c457b525b7c7e04059b76094c3b5097dd32a6b6ecd3b5f9e564ec49546042\"" Dec 12 18:35:16.464273 containerd[1553]: time="2025-12-12T18:35:16.464163072Z" level=info msg="connecting to shim 157c457b525b7c7e04059b76094c3b5097dd32a6b6ecd3b5f9e564ec49546042" address="unix:///run/containerd/s/494d55a182bc9b7da9de74d945116fc893abd55198c45c82c18d5c1f097feb90" protocol=ttrpc version=3 Dec 12 18:35:16.502407 systemd[1]: Started cri-containerd-157c457b525b7c7e04059b76094c3b5097dd32a6b6ecd3b5f9e564ec49546042.scope - libcontainer container 157c457b525b7c7e04059b76094c3b5097dd32a6b6ecd3b5f9e564ec49546042. Dec 12 18:35:16.524861 containerd[1553]: time="2025-12-12T18:35:16.524773317Z" level=error msg="Failed to destroy network for sandbox \"129dc81e6b697ae812810ea0aa181407384d6b80e2ff119f0a2291b25903356b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.592032 containerd[1553]: time="2025-12-12T18:35:16.591938116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c994577d-zf2dw,Uid:5bd7d04f-25d6-4f6d-8d32-675830519b60,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"129dc81e6b697ae812810ea0aa181407384d6b80e2ff119f0a2291b25903356b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.592339 kubelet[2769]: E1212 18:35:16.592224 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129dc81e6b697ae812810ea0aa181407384d6b80e2ff119f0a2291b25903356b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:35:16.592339 kubelet[2769]: E1212 18:35:16.592285 2769 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129dc81e6b697ae812810ea0aa181407384d6b80e2ff119f0a2291b25903356b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" Dec 12 18:35:16.592339 kubelet[2769]: E1212 18:35:16.592305 2769 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129dc81e6b697ae812810ea0aa181407384d6b80e2ff119f0a2291b25903356b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" Dec 12 18:35:16.592561 kubelet[2769]: E1212 18:35:16.592362 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57c994577d-zf2dw_calico-system(5bd7d04f-25d6-4f6d-8d32-675830519b60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57c994577d-zf2dw_calico-system(5bd7d04f-25d6-4f6d-8d32-675830519b60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"129dc81e6b697ae812810ea0aa181407384d6b80e2ff119f0a2291b25903356b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60" Dec 12 18:35:16.689826 containerd[1553]: time="2025-12-12T18:35:16.689772117Z" level=info msg="StartContainer for \"157c457b525b7c7e04059b76094c3b5097dd32a6b6ecd3b5f9e564ec49546042\" returns successfully" Dec 12 18:35:16.907089 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:35:16.908610 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 18:35:16.967503 systemd[1]: run-netns-cni\x2d681d5f66\x2da74c\x2d12c0\x2da4d5\x2d01e9b3aa00fa.mount: Deactivated successfully. 
Dec 12 18:35:17.474489 kubelet[2769]: E1212 18:35:17.474080 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:17.476836 kubelet[2769]: I1212 18:35:17.476801 2769 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/130041d5-7a82-45e2-b4bb-3f947d0d2476-whisker-backend-key-pair\") pod \"130041d5-7a82-45e2-b4bb-3f947d0d2476\" (UID: \"130041d5-7a82-45e2-b4bb-3f947d0d2476\") " Dec 12 18:35:17.479571 kubelet[2769]: I1212 18:35:17.478487 2769 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/130041d5-7a82-45e2-b4bb-3f947d0d2476-whisker-ca-bundle\") pod \"130041d5-7a82-45e2-b4bb-3f947d0d2476\" (UID: \"130041d5-7a82-45e2-b4bb-3f947d0d2476\") " Dec 12 18:35:17.479786 kubelet[2769]: I1212 18:35:17.479763 2769 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t64rv\" (UniqueName: \"kubernetes.io/projected/130041d5-7a82-45e2-b4bb-3f947d0d2476-kube-api-access-t64rv\") pod \"130041d5-7a82-45e2-b4bb-3f947d0d2476\" (UID: \"130041d5-7a82-45e2-b4bb-3f947d0d2476\") " Dec 12 18:35:17.480205 kubelet[2769]: I1212 18:35:17.479414 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/130041d5-7a82-45e2-b4bb-3f947d0d2476-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "130041d5-7a82-45e2-b4bb-3f947d0d2476" (UID: "130041d5-7a82-45e2-b4bb-3f947d0d2476"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:35:17.486592 systemd[1]: var-lib-kubelet-pods-130041d5\x2d7a82\x2d45e2\x2db4bb\x2d3f947d0d2476-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt64rv.mount: Deactivated successfully. Dec 12 18:35:17.491939 kubelet[2769]: I1212 18:35:17.490434 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/130041d5-7a82-45e2-b4bb-3f947d0d2476-kube-api-access-t64rv" (OuterVolumeSpecName: "kube-api-access-t64rv") pod "130041d5-7a82-45e2-b4bb-3f947d0d2476" (UID: "130041d5-7a82-45e2-b4bb-3f947d0d2476"). InnerVolumeSpecName "kube-api-access-t64rv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:35:17.492313 systemd[1]: var-lib-kubelet-pods-130041d5\x2d7a82\x2d45e2\x2db4bb\x2d3f947d0d2476-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:35:17.493603 kubelet[2769]: I1212 18:35:17.493084 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/130041d5-7a82-45e2-b4bb-3f947d0d2476-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "130041d5-7a82-45e2-b4bb-3f947d0d2476" (UID: "130041d5-7a82-45e2-b4bb-3f947d0d2476"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:35:17.575874 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:54782.service - OpenSSH per-connection server daemon (10.0.0.1:54782). 
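The UnmountVolume / TearDown sequence above is kubelet's volume reconciler noticing that the deleted whisker pod is gone from the desired state and tearing down each of its three volumes in turn. The shape of that loop, with toy types standing in for kubelet's:

    // volteardown.go: desired-versus-actual reconcile in miniature. The pod
    // was removed, so nothing is desired and every still-mounted volume is
    // unmounted and marked detached, echoing the messages above.
    package main

    import "fmt"

    func main() {
    	mounted := []string{"whisker-backend-key-pair", "whisker-ca-bundle", "kube-api-access-t64rv"}
    	desired := map[string]bool{} // pod deleted: no volumes desired

    	for _, vol := range mounted {
    		if !desired[vol] {
    			fmt.Printf("UnmountVolume started for volume %q\n", vol)
    			fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
    			fmt.Printf("Volume detached for volume %q\n", vol)
    		}
    	}
    }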
Dec 12 18:35:17.583584 kubelet[2769]: I1212 18:35:17.582212 2769 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t64rv\" (UniqueName: \"kubernetes.io/projected/130041d5-7a82-45e2-b4bb-3f947d0d2476-kube-api-access-t64rv\") on node \"localhost\" DevicePath \"\"" Dec 12 18:35:17.583584 kubelet[2769]: I1212 18:35:17.582254 2769 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/130041d5-7a82-45e2-b4bb-3f947d0d2476-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 18:35:17.583584 kubelet[2769]: I1212 18:35:17.582265 2769 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/130041d5-7a82-45e2-b4bb-3f947d0d2476-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 18:35:17.690358 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 54782 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:17.694210 sshd-session[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:17.703477 systemd-logind[1540]: New session 10 of user core. Dec 12 18:35:17.711243 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:35:17.930010 sshd[3988]: Connection closed by 10.0.0.1 port 54782 Dec 12 18:35:17.930395 sshd-session[3985]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:17.935205 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:54782.service: Deactivated successfully. Dec 12 18:35:17.937799 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:35:17.941860 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:35:17.944640 systemd-logind[1540]: Removed session 10. Dec 12 18:35:18.282219 containerd[1553]: time="2025-12-12T18:35:18.282082611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wcf2b,Uid:4c88e5b7-6c17-45c7-92f0-9be254ebdd59,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:18.478638 kubelet[2769]: E1212 18:35:18.478581 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:18.489821 systemd[1]: Removed slice kubepods-besteffort-pod130041d5_7a82_45e2_b4bb_3f947d0d2476.slice - libcontainer container kubepods-besteffort-pod130041d5_7a82_45e2_b4bb_3f947d0d2476.slice. Dec 12 18:35:18.531937 kubelet[2769]: I1212 18:35:18.531543 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gbz6l" podStartSLOduration=3.329207626 podStartE2EDuration="27.531523525s" podCreationTimestamp="2025-12-12 18:34:51 +0000 UTC" firstStartedPulling="2025-12-12 18:34:51.905370078 +0000 UTC m=+22.748677213" lastFinishedPulling="2025-12-12 18:35:16.107685977 +0000 UTC m=+46.950993112" observedRunningTime="2025-12-12 18:35:17.511320315 +0000 UTC m=+48.354627470" watchObservedRunningTime="2025-12-12 18:35:18.531523525 +0000 UTC m=+49.374830660" Dec 12 18:35:18.917142 systemd[1]: Created slice kubepods-besteffort-pod1290709f_462a_4bdb_93db_9172d8fdb29d.slice - libcontainer container kubepods-besteffort-pod1290709f_462a_4bdb_93db_9172d8fdb29d.slice. 
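The pod_startup_latency_tracker line above is plain arithmetic over the timestamps it prints: the E2E duration is the watch-observed running time minus pod creation, and the SLO duration additionally excludes image-pull time. Reproducing it with the values copied from the log entry:

    // startuplatency.go: recomputes podStartE2EDuration and podStartSLOduration
    // from the timestamps in the calico-node-gbz6l line above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-12-12 18:34:51 +0000 UTC")
    	firstPull := mustParse("2025-12-12 18:34:51.905370078 +0000 UTC")
    	lastPull := mustParse("2025-12-12 18:35:16.107685977 +0000 UTC")
    	observed := mustParse("2025-12-12 18:35:18.531523525 +0000 UTC")

    	e2e := observed.Sub(created)
    	slo := e2e - lastPull.Sub(firstPull) // pull time excluded from the SLO figure
    	fmt.Println("podStartE2EDuration:", e2e) // 27.531523525s
    	fmt.Println("podStartSLOduration:", slo) // 3.329207626s
    }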
Dec 12 18:35:18.994186 kubelet[2769]: I1212 18:35:18.994099 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls9c8\" (UniqueName: \"kubernetes.io/projected/1290709f-462a-4bdb-93db-9172d8fdb29d-kube-api-access-ls9c8\") pod \"whisker-7b5c98f7cb-flntl\" (UID: \"1290709f-462a-4bdb-93db-9172d8fdb29d\") " pod="calico-system/whisker-7b5c98f7cb-flntl" Dec 12 18:35:18.994583 kubelet[2769]: I1212 18:35:18.994564 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1290709f-462a-4bdb-93db-9172d8fdb29d-whisker-backend-key-pair\") pod \"whisker-7b5c98f7cb-flntl\" (UID: \"1290709f-462a-4bdb-93db-9172d8fdb29d\") " pod="calico-system/whisker-7b5c98f7cb-flntl" Dec 12 18:35:18.994939 kubelet[2769]: I1212 18:35:18.994893 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1290709f-462a-4bdb-93db-9172d8fdb29d-whisker-ca-bundle\") pod \"whisker-7b5c98f7cb-flntl\" (UID: \"1290709f-462a-4bdb-93db-9172d8fdb29d\") " pod="calico-system/whisker-7b5c98f7cb-flntl" Dec 12 18:35:19.309884 containerd[1553]: time="2025-12-12T18:35:19.295403757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5c98f7cb-flntl,Uid:1290709f-462a-4bdb-93db-9172d8fdb29d,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:19.331438 kubelet[2769]: E1212 18:35:19.324104 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:19.331438 kubelet[2769]: I1212 18:35:19.326656 2769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="130041d5-7a82-45e2-b4bb-3f947d0d2476" path="/var/lib/kubelet/pods/130041d5-7a82-45e2-b4bb-3f947d0d2476/volumes" Dec 12 18:35:19.331723 containerd[1553]: time="2025-12-12T18:35:19.327367828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-74dh6,Uid:b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a,Namespace:kube-system,Attempt:0,}" Dec 12 18:35:19.331723 containerd[1553]: time="2025-12-12T18:35:19.327682412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-755s8,Uid:924b51e0-ed81-4bc8-a597-a44686b519ff,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:35:19.360787 kubelet[2769]: E1212 18:35:19.356203 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:19.373480 systemd-networkd[1458]: cali8f847213ddb: Link UP Dec 12 18:35:19.382376 containerd[1553]: time="2025-12-12T18:35:19.379075534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nscm8,Uid:fefe395e-a76c-40a6-a6e5-38c9f3e1ee92,Namespace:kube-system,Attempt:0,}" Dec 12 18:35:19.382668 systemd-networkd[1458]: cali8f847213ddb: Gained carrier Dec 12 18:35:19.453227 containerd[1553]: 2025-12-12 18:35:18.325 [INFO][4014] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:35:19.453227 containerd[1553]: 2025-12-12 18:35:18.391 [INFO][4014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--wcf2b-eth0 goldmane-7c778bb748- calico-system 4c88e5b7-6c17-45c7-92f0-9be254ebdd59 844 0 2025-12-12 18:34:49 
+0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-wcf2b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8f847213ddb [] [] }} ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Namespace="calico-system" Pod="goldmane-7c778bb748-wcf2b" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--wcf2b-" Dec 12 18:35:19.453227 containerd[1553]: 2025-12-12 18:35:18.391 [INFO][4014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Namespace="calico-system" Pod="goldmane-7c778bb748-wcf2b" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" Dec 12 18:35:19.453227 containerd[1553]: 2025-12-12 18:35:18.925 [INFO][4026] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" HandleID="k8s-pod-network.ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Workload="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:18.929 [INFO][4026] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" HandleID="k8s-pod-network.ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Workload="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001861e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-wcf2b", "timestamp":"2025-12-12 18:35:18.925457949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:18.929 [INFO][4026] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:18.931 [INFO][4026] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
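The IPAM trace that follows finds the host's affine block 192.168.88.128/26 and assigns the first free address from it. A netip-based toy that yields the same first pick, 192.168.88.129 (skipping the .128 network address); Calico's real allocator also manages handles, affinities, and the datastore writes shown in the trace:

    // ipamblock.go: first-free assignment inside an affine /26 block, matching
    // the 192.168.88.129/26 result logged below. A sketch, not Calico's code.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
    	// Start one past the network address and walk the block.
    	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
    		if !used[a] {
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	block := netip.MustParsePrefix("192.168.88.128/26")
    	addr, ok := firstFree(block, map[netip.Addr]bool{})
    	fmt.Println(addr, ok) // 192.168.88.129 true
    }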
Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:18.931 [INFO][4026] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:18.975 [INFO][4026] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" host="localhost" Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:19.007 [INFO][4026] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:19.020 [INFO][4026] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:19.026 [INFO][4026] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:19.035 [INFO][4026] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:19.453616 containerd[1553]: 2025-12-12 18:35:19.036 [INFO][4026] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" host="localhost" Dec 12 18:35:19.454451 containerd[1553]: 2025-12-12 18:35:19.039 [INFO][4026] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77 Dec 12 18:35:19.454451 containerd[1553]: 2025-12-12 18:35:19.050 [INFO][4026] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" host="localhost" Dec 12 18:35:19.454451 containerd[1553]: 2025-12-12 18:35:19.075 [INFO][4026] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" host="localhost" Dec 12 18:35:19.454451 containerd[1553]: 2025-12-12 18:35:19.076 [INFO][4026] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" host="localhost" Dec 12 18:35:19.454451 containerd[1553]: 2025-12-12 18:35:19.076 [INFO][4026] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
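The "About to acquire / Acquired / Released host-wide IPAM lock" bracketing above serializes assignments so that concurrent CNI adds on one node cannot hand out the same address. The minimal mutex shape of that guarantee; Calico's actual lock also covers the block and handle writes in the trace:

    // ipamlock.go: a host-wide lock in miniature. Three concurrent "CNI adds"
    // each get a distinct address because assignment is serialized.
    package main

    import (
    	"fmt"
    	"sync"
    )

    type hostIPAM struct {
    	mu   sync.Mutex
    	next int
    }

    func (h *hostIPAM) assign() int {
    	h.mu.Lock()         // "Acquired host-wide IPAM lock."
    	defer h.mu.Unlock() // "Released host-wide IPAM lock."
    	h.next++
    	return h.next
    }

    func main() {
    	h := &hostIPAM{next: 128} // block base, as in 192.168.88.128/26
    	var wg sync.WaitGroup
    	for i := 0; i < 3; i++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			fmt.Printf("assigned .%d\n", h.assign())
    		}()
    	}
    	wg.Wait()
    }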
Dec 12 18:35:19.454451 containerd[1553]: 2025-12-12 18:35:19.076 [INFO][4026] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" HandleID="k8s-pod-network.ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Workload="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" Dec 12 18:35:19.454633 containerd[1553]: 2025-12-12 18:35:19.086 [INFO][4014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Namespace="calico-system" Pod="goldmane-7c778bb748-wcf2b" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--wcf2b-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4c88e5b7-6c17-45c7-92f0-9be254ebdd59", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-wcf2b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f847213ddb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:19.454633 containerd[1553]: 2025-12-12 18:35:19.086 [INFO][4014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Namespace="calico-system" Pod="goldmane-7c778bb748-wcf2b" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" Dec 12 18:35:19.456957 containerd[1553]: 2025-12-12 18:35:19.086 [INFO][4014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f847213ddb ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Namespace="calico-system" Pod="goldmane-7c778bb748-wcf2b" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" Dec 12 18:35:19.456957 containerd[1553]: 2025-12-12 18:35:19.376 [INFO][4014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Namespace="calico-system" Pod="goldmane-7c778bb748-wcf2b" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" Dec 12 18:35:19.464545 containerd[1553]: 2025-12-12 18:35:19.384 [INFO][4014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Namespace="calico-system" Pod="goldmane-7c778bb748-wcf2b" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--wcf2b-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4c88e5b7-6c17-45c7-92f0-9be254ebdd59", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77", Pod:"goldmane-7c778bb748-wcf2b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f847213ddb", MAC:"f6:f9:0d:16:36:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:19.464677 containerd[1553]: 2025-12-12 18:35:19.419 [INFO][4014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" Namespace="calico-system" Pod="goldmane-7c778bb748-wcf2b" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--wcf2b-eth0" Dec 12 18:35:19.484354 kubelet[2769]: E1212 18:35:19.484307 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:19.785799 containerd[1553]: time="2025-12-12T18:35:19.784938748Z" level=info msg="connecting to shim ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77" address="unix:///run/containerd/s/dcf06526839b2f07d60a21cb24aaecc67f3d47ea6860c7130f34dce6651c0a8a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:19.855363 systemd[1]: Started cri-containerd-ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77.scope - libcontainer container ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77. 
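The kubelet's "Nameserver limits exceeded" warning above fires because the node's resolv.conf lists more nameservers than the resolver limit of three (the classic glibc MAXNS cap); the applied line keeps only 1.1.1.1, 1.0.0.1, and 8.8.8.8. A sketch of that truncation, with the fourth upstream invented for illustration:

package main

import "fmt"

// maxNameservers matches the classic resolver limit (glibc MAXNS); the
// kubelet applies the same cap when composing a pod's nameserver line.
const maxNameservers = 3

func capNameservers(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	// The fourth upstream here is hypothetical; the first three are the
	// ones the log shows surviving the cut.
	fmt.Println(capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}))
	// Output: [1.1.1.1 1.0.0.1 8.8.8.8]
}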
Dec 12 18:35:19.907924 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:35:19.976833 systemd-networkd[1458]: calid8d04edda95: Link UP Dec 12 18:35:19.979535 containerd[1553]: time="2025-12-12T18:35:19.979476856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wcf2b,Uid:4c88e5b7-6c17-45c7-92f0-9be254ebdd59,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce61d1a7d64a12f626b1a6043228b93a7e745f97c5f638795390783f0b689d77\"" Dec 12 18:35:19.981769 systemd-networkd[1458]: calid8d04edda95: Gained carrier Dec 12 18:35:19.983186 containerd[1553]: time="2025-12-12T18:35:19.983123555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:35:20.019392 containerd[1553]: 2025-12-12 18:35:19.608 [INFO][4183] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--74dh6-eth0 coredns-66bc5c9577- kube-system b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a 842 0 2025-12-12 18:34:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-74dh6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid8d04edda95 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Namespace="kube-system" Pod="coredns-66bc5c9577-74dh6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--74dh6-" Dec 12 18:35:20.019392 containerd[1553]: 2025-12-12 18:35:19.608 [INFO][4183] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Namespace="kube-system" Pod="coredns-66bc5c9577-74dh6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" Dec 12 18:35:20.019392 containerd[1553]: 2025-12-12 18:35:19.808 [INFO][4286] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" HandleID="k8s-pod-network.1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Workload="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.808 [INFO][4286] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" HandleID="k8s-pod-network.1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Workload="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000117d70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-74dh6", "timestamp":"2025-12-12 18:35:19.808378668 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.808 [INFO][4286] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.808 [INFO][4286] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
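Several CNI ADD requests run concurrently through this section ([4026], [4286], and more below), and each serializes on the host-wide IPAM lock: "About to acquire host-wide IPAM lock" can sit for hundreds of milliseconds until the previous holder logs "Released". A sketch of that serialization using an advisory file lock; the lock-file path is illustrative, not Calico's actual location:

package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostWideLock runs fn while holding an exclusive flock, so that
// concurrent allocators on the same host take turns mutating IPAM state.
func withHostWideLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	fmt.Println("About to acquire host-wide IPAM lock.")
	// Blocks until the current holder releases the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
		fmt.Println("Released host-wide IPAM lock.")
	}()
	return fn()
}

func main() {
	_ = withHostWideLock("/tmp/ipam.lock", func() error {
		// read block, pick a free address, write block back
		return nil
	})
}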
Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.808 [INFO][4286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.823 [INFO][4286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" host="localhost" Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.842 [INFO][4286] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.865 [INFO][4286] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.880 [INFO][4286] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.892 [INFO][4286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:20.019715 containerd[1553]: 2025-12-12 18:35:19.892 [INFO][4286] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" host="localhost" Dec 12 18:35:20.020109 containerd[1553]: 2025-12-12 18:35:19.897 [INFO][4286] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d Dec 12 18:35:20.020109 containerd[1553]: 2025-12-12 18:35:19.924 [INFO][4286] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" host="localhost" Dec 12 18:35:20.020109 containerd[1553]: 2025-12-12 18:35:19.946 [INFO][4286] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" host="localhost" Dec 12 18:35:20.020109 containerd[1553]: 2025-12-12 18:35:19.946 [INFO][4286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" host="localhost" Dec 12 18:35:20.020109 containerd[1553]: 2025-12-12 18:35:19.946 [INFO][4286] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
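Host-side interface names such as cali8f847213ddb and calid8d04edda95 follow Calico's convention of a fixed "cali" prefix plus a truncated hash of the workload identity, keeping the result within the kernel's 15-byte IFNAMSIZ limit. A sketch of that scheme; the exact hash input Calico feeds in is an assumption here:

package main

import (
	"crypto/sha1"
	"fmt"
)

// vethName derives a stable host-side interface name from a workload
// identifier: "cali" (4 bytes) plus 11 hex digits of a SHA-1 digest
// fills the 15-byte interface-name budget exactly.
func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return fmt.Sprintf("cali%x", sum)[:15]
}

func main() {
	// Hypothetical input; the real plugin hashes its own endpoint key.
	fmt.Println(vethName("kube-system/coredns-66bc5c9577-74dh6"))
}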
Dec 12 18:35:20.020109 containerd[1553]: 2025-12-12 18:35:19.946 [INFO][4286] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" HandleID="k8s-pod-network.1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Workload="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" Dec 12 18:35:20.020346 containerd[1553]: 2025-12-12 18:35:19.960 [INFO][4183] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Namespace="kube-system" Pod="coredns-66bc5c9577-74dh6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--74dh6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-74dh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid8d04edda95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:20.020346 containerd[1553]: 2025-12-12 18:35:19.961 [INFO][4183] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Namespace="kube-system" Pod="coredns-66bc5c9577-74dh6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" Dec 12 18:35:20.020346 containerd[1553]: 2025-12-12 18:35:19.961 [INFO][4183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8d04edda95 ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Namespace="kube-system" Pod="coredns-66bc5c9577-74dh6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" Dec 12 18:35:20.020346 containerd[1553]: 2025-12-12 
18:35:19.983 [INFO][4183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Namespace="kube-system" Pod="coredns-66bc5c9577-74dh6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" Dec 12 18:35:20.020346 containerd[1553]: 2025-12-12 18:35:19.984 [INFO][4183] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Namespace="kube-system" Pod="coredns-66bc5c9577-74dh6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--74dh6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d", Pod:"coredns-66bc5c9577-74dh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid8d04edda95", MAC:"52:a4:e1:92:86:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:20.020346 containerd[1553]: 2025-12-12 18:35:20.015 [INFO][4183] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" Namespace="kube-system" Pod="coredns-66bc5c9577-74dh6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--74dh6-eth0" Dec 12 18:35:20.083049 systemd-networkd[1458]: calif35515c16ef: Link UP Dec 12 18:35:20.083320 systemd-networkd[1458]: calif35515c16ef: Gained carrier Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:19.702 [INFO][4181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--nscm8-eth0 coredns-66bc5c9577- kube-system 
fefe395e-a76c-40a6-a6e5-38c9f3e1ee92 845 0 2025-12-12 18:34:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-nscm8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif35515c16ef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Namespace="kube-system" Pod="coredns-66bc5c9577-nscm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nscm8-" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:19.702 [INFO][4181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Namespace="kube-system" Pod="coredns-66bc5c9577-nscm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:19.838 [INFO][4306] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" HandleID="k8s-pod-network.b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Workload="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:19.845 [INFO][4306] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" HandleID="k8s-pod-network.b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Workload="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000581de0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-nscm8", "timestamp":"2025-12-12 18:35:19.838784598 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:19.867 [INFO][4306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:19.946 [INFO][4306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:19.947 [INFO][4306] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:19.962 [INFO][4306] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" host="localhost" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:19.995 [INFO][4306] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.022 [INFO][4306] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.028 [INFO][4306] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.034 [INFO][4306] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.034 [INFO][4306] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" host="localhost" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.039 [INFO][4306] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.053 [INFO][4306] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" host="localhost" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.068 [INFO][4306] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" host="localhost" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.068 [INFO][4306] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" host="localhost" Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.068 [INFO][4306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
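Each ADD in these traces logs "Disabling IPv4 forwarding" (dataplane_linux.go 508) for the freshly plumbed interface. On Linux that is a per-interface sysctl under /proc/sys; a sketch of the general mechanism, not Calico's actual code, with the interface name taken from the log:

package main

import (
	"fmt"
	"os"
)

// disableIPv4Forwarding writes the per-interface forwarding sysctl.
// Requires privileges; fails cleanly as an ordinary user.
func disableIPv4Forwarding(iface string) error {
	path := fmt.Sprintf("/proc/sys/net/ipv4/conf/%s/forwarding", iface)
	return os.WriteFile(path, []byte("0\n"), 0o644)
}

func main() {
	if err := disableIPv4Forwarding("calif35515c16ef"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}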
Dec 12 18:35:20.115407 containerd[1553]: 2025-12-12 18:35:20.068 [INFO][4306] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" HandleID="k8s-pod-network.b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Workload="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" Dec 12 18:35:20.116611 containerd[1553]: 2025-12-12 18:35:20.074 [INFO][4181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Namespace="kube-system" Pod="coredns-66bc5c9577-nscm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nscm8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fefe395e-a76c-40a6-a6e5-38c9f3e1ee92", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-nscm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif35515c16ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:20.116611 containerd[1553]: 2025-12-12 18:35:20.077 [INFO][4181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Namespace="kube-system" Pod="coredns-66bc5c9577-nscm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" Dec 12 18:35:20.116611 containerd[1553]: 2025-12-12 18:35:20.078 [INFO][4181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif35515c16ef ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Namespace="kube-system" Pod="coredns-66bc5c9577-nscm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" Dec 12 18:35:20.116611 containerd[1553]: 2025-12-12 
18:35:20.080 [INFO][4181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Namespace="kube-system" Pod="coredns-66bc5c9577-nscm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" Dec 12 18:35:20.116611 containerd[1553]: 2025-12-12 18:35:20.081 [INFO][4181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Namespace="kube-system" Pod="coredns-66bc5c9577-nscm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nscm8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fefe395e-a76c-40a6-a6e5-38c9f3e1ee92", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee", Pod:"coredns-66bc5c9577-nscm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif35515c16ef", MAC:"c6:c9:8c:6e:f8:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:20.116611 containerd[1553]: 2025-12-12 18:35:20.103 [INFO][4181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" Namespace="kube-system" Pod="coredns-66bc5c9577-nscm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nscm8-eth0" Dec 12 18:35:20.118522 containerd[1553]: time="2025-12-12T18:35:20.115263884Z" level=info msg="connecting to shim 1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d" address="unix:///run/containerd/s/39da6b64d2f75900d76ff8f4aab628950881241562b5d212cffe35005c7d771d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:20.172239 systemd[1]: Started 
cri-containerd-1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d.scope - libcontainer container 1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d. Dec 12 18:35:20.203975 systemd-networkd[1458]: calic1eda70991f: Link UP Dec 12 18:35:20.209941 containerd[1553]: time="2025-12-12T18:35:20.207143092Z" level=info msg="connecting to shim b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee" address="unix:///run/containerd/s/c60d9263e2bf9e4c172a70d6475dcb73f4b08796e10865d75b44db772a93dbdb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:20.212834 systemd-networkd[1458]: calic1eda70991f: Gained carrier Dec 12 18:35:20.219349 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:19.750 [INFO][4182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7b5c98f7cb--flntl-eth0 whisker-7b5c98f7cb- calico-system 1290709f-462a-4bdb-93db-9172d8fdb29d 972 0 2025-12-12 18:35:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b5c98f7cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7b5c98f7cb-flntl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic1eda70991f [] [] }} ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Namespace="calico-system" Pod="whisker-7b5c98f7cb-flntl" WorkloadEndpoint="localhost-k8s-whisker--7b5c98f7cb--flntl-" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:19.750 [INFO][4182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Namespace="calico-system" Pod="whisker-7b5c98f7cb-flntl" WorkloadEndpoint="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:19.892 [INFO][4323] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" HandleID="k8s-pod-network.b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Workload="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:19.898 [INFO][4323] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" HandleID="k8s-pod-network.b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Workload="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000385de0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7b5c98f7cb-flntl", "timestamp":"2025-12-12 18:35:19.89259044 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:19.898 [INFO][4323] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.068 [INFO][4323] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
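The "connecting to shim ... protocol=ttrpc version=3" lines show containerd reaching each task's shim over a per-shim unix socket speaking ttrpc, a lightweight gRPC variant. A minimal dial sketch with the containerd ttrpc package; the socket path is copied from the log above and differs for every shim:

package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Per-shim abstract socket under /run/containerd/s/, as in the log.
	conn, err := net.Dial("unix", "/run/containerd/s/c60d9263e2bf9e4c172a70d6475dcb73f4b08796e10865d75b44db772a93dbdb")
	if err != nil {
		log.Fatal(err)
	}
	// ttrpc multiplexes all task-API RPCs over this single connection.
	client := ttrpc.NewClient(conn)
	defer client.Close()
	// Generated task-service clients would wrap `client` from here on.
}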
Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.069 [INFO][4323] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.088 [INFO][4323] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" host="localhost" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.110 [INFO][4323] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.127 [INFO][4323] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.132 [INFO][4323] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.142 [INFO][4323] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.142 [INFO][4323] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" host="localhost" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.145 [INFO][4323] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.161 [INFO][4323] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" host="localhost" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.190 [INFO][4323] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" host="localhost" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.190 [INFO][4323] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" host="localhost" Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.190 [INFO][4323] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
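All of the assignments in this section come out of the same /26: 192.168.88.128/26 spans the 64 addresses .128 through .191, and the workloads draw .129, .130, .131, .132 (and .133 further down) in sequence. A quick membership check with the standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	for _, s := range []string{"192.168.88.129", "192.168.88.133", "192.168.88.192"} {
		a := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", a, block, block.Contains(a))
	}
	// Prints true, true, false: .192 already belongs to the next /26.
}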
Dec 12 18:35:20.294408 containerd[1553]: 2025-12-12 18:35:20.190 [INFO][4323] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" HandleID="k8s-pod-network.b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Workload="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" Dec 12 18:35:20.295210 containerd[1553]: 2025-12-12 18:35:20.197 [INFO][4182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Namespace="calico-system" Pod="whisker-7b5c98f7cb-flntl" WorkloadEndpoint="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b5c98f7cb--flntl-eth0", GenerateName:"whisker-7b5c98f7cb-", Namespace:"calico-system", SelfLink:"", UID:"1290709f-462a-4bdb-93db-9172d8fdb29d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b5c98f7cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7b5c98f7cb-flntl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic1eda70991f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:20.295210 containerd[1553]: 2025-12-12 18:35:20.198 [INFO][4182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Namespace="calico-system" Pod="whisker-7b5c98f7cb-flntl" WorkloadEndpoint="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" Dec 12 18:35:20.295210 containerd[1553]: 2025-12-12 18:35:20.198 [INFO][4182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1eda70991f ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Namespace="calico-system" Pod="whisker-7b5c98f7cb-flntl" WorkloadEndpoint="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" Dec 12 18:35:20.295210 containerd[1553]: 2025-12-12 18:35:20.212 [INFO][4182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Namespace="calico-system" Pod="whisker-7b5c98f7cb-flntl" WorkloadEndpoint="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" Dec 12 18:35:20.295210 containerd[1553]: 2025-12-12 18:35:20.214 [INFO][4182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Namespace="calico-system" Pod="whisker-7b5c98f7cb-flntl" WorkloadEndpoint="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b5c98f7cb--flntl-eth0", GenerateName:"whisker-7b5c98f7cb-", Namespace:"calico-system", SelfLink:"", UID:"1290709f-462a-4bdb-93db-9172d8fdb29d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b5c98f7cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee", Pod:"whisker-7b5c98f7cb-flntl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic1eda70991f", MAC:"02:58:74:fb:9a:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:20.295210 containerd[1553]: 2025-12-12 18:35:20.254 [INFO][4182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" Namespace="calico-system" Pod="whisker-7b5c98f7cb-flntl" WorkloadEndpoint="localhost-k8s-whisker--7b5c98f7cb--flntl-eth0" Dec 12 18:35:20.297247 systemd[1]: Started cri-containerd-b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee.scope - libcontainer container b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee. 
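Comparing the two endpoint dumps for the whisker pod shows the delta the plugin writes back after dataplane setup: ContainerID and MAC go from empty to populated just before "Wrote updated endpoint to datastore". A trimmed local mirror of the logged structure to make that delta explicit (not Calico's real type; values copied from the log):

package main

import "fmt"

// workloadEndpointSpec is a cut-down stand-in for the v3.WorkloadEndpoint
// spec printed in the dumps above.
type workloadEndpointSpec struct {
	Node          string
	ContainerID   string
	Pod           string
	Endpoint      string
	IPNetworks    []string
	InterfaceName string
	MAC           string
}

func main() {
	// State at "Populated endpoint": identity and addressing are known,
	// but the container side has not been wired up yet.
	wep := workloadEndpointSpec{
		Node:          "localhost",
		Pod:           "whisker-7b5c98f7cb-flntl",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.88.132/32"},
		InterfaceName: "calic1eda70991f",
	}
	// State at "Added Mac, interface name, and active container ID":
	// the dataplane now exists, so the runtime identity is filled in.
	wep.ContainerID = "b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee"
	wep.MAC = "02:58:74:fb:9a:20"
	fmt.Printf("%+v\n", wep) // then: "Wrote updated endpoint to datastore"
}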
Dec 12 18:35:20.335946 containerd[1553]: time="2025-12-12T18:35:20.334659910Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:20.341713 containerd[1553]: time="2025-12-12T18:35:20.338398471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-74dh6,Uid:b7aa5f12-60b1-4a4a-b2f4-b7aa5d15059a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d\"" Dec 12 18:35:20.341869 kubelet[2769]: E1212 18:35:20.339860 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:20.346214 containerd[1553]: time="2025-12-12T18:35:20.345830988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:20.357427 containerd[1553]: time="2025-12-12T18:35:20.357299106Z" level=info msg="CreateContainer within sandbox \"1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:35:20.363171 containerd[1553]: time="2025-12-12T18:35:20.362704864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:35:20.364972 kubelet[2769]: E1212 18:35:20.364523 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:35:20.364972 kubelet[2769]: E1212 18:35:20.364701 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:35:20.367789 kubelet[2769]: E1212 18:35:20.365600 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wcf2b_calico-system(4c88e5b7-6c17-45c7-92f0-9be254ebdd59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:20.367789 kubelet[2769]: E1212 18:35:20.365678 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59" Dec 12 18:35:20.369275 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 
18:35:20.420361 systemd-networkd[1458]: calic9be3cf222a: Link UP Dec 12 18:35:20.423417 systemd-networkd[1458]: calic9be3cf222a: Gained carrier Dec 12 18:35:20.442412 containerd[1553]: time="2025-12-12T18:35:20.442353032Z" level=info msg="connecting to shim b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee" address="unix:///run/containerd/s/3f08e8dc05ba1214935fece757ffb08bf2de0d1fd9066ed21a362182befc5b79" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:20.444143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800629740.mount: Deactivated successfully. Dec 12 18:35:20.453341 containerd[1553]: time="2025-12-12T18:35:20.453265925Z" level=info msg="Container 047378a96dd0a80cc671778a57b02bd813487eaf00195b0237e55bacf2367699: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:19.843 [INFO][4201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0 calico-apiserver-7b767d98d4- calico-apiserver 924b51e0-ed81-4bc8-a597-a44686b519ff 851 0 2025-12-12 18:34:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b767d98d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b767d98d4-755s8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic9be3cf222a [] [] }} ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-755s8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--755s8-" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:19.845 [INFO][4201] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-755s8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:19.910 [INFO][4362] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" HandleID="k8s-pod-network.a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Workload="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:19.911 [INFO][4362] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" HandleID="k8s-pod-network.a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Workload="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b85e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b767d98d4-755s8", "timestamp":"2025-12-12 18:35:19.910527588 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:19.911 [INFO][4362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.192 [INFO][4362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.192 [INFO][4362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.210 [INFO][4362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" host="localhost" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.261 [INFO][4362] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.292 [INFO][4362] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.302 [INFO][4362] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.315 [INFO][4362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.315 [INFO][4362] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" host="localhost" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.335 [INFO][4362] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.355 [INFO][4362] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" host="localhost" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.380 [INFO][4362] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" host="localhost" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.380 [INFO][4362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" host="localhost" Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.380 [INFO][4362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:35:20.464548 containerd[1553]: 2025-12-12 18:35:20.380 [INFO][4362] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" HandleID="k8s-pod-network.a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Workload="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" Dec 12 18:35:20.466381 containerd[1553]: 2025-12-12 18:35:20.407 [INFO][4201] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-755s8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0", GenerateName:"calico-apiserver-7b767d98d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"924b51e0-ed81-4bc8-a597-a44686b519ff", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b767d98d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b767d98d4-755s8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9be3cf222a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:20.466381 containerd[1553]: 2025-12-12 18:35:20.408 [INFO][4201] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-755s8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" Dec 12 18:35:20.466381 containerd[1553]: 2025-12-12 18:35:20.408 [INFO][4201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9be3cf222a ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-755s8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" Dec 12 18:35:20.466381 containerd[1553]: 2025-12-12 18:35:20.425 [INFO][4201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-755s8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" Dec 12 18:35:20.466381 containerd[1553]: 2025-12-12 18:35:20.429 [INFO][4201] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-755s8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0", GenerateName:"calico-apiserver-7b767d98d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"924b51e0-ed81-4bc8-a597-a44686b519ff", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b767d98d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e", Pod:"calico-apiserver-7b767d98d4-755s8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9be3cf222a", MAC:"62:83:a7:0b:4b:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:20.466381 containerd[1553]: 2025-12-12 18:35:20.455 [INFO][4201] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-755s8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--755s8-eth0" Dec 12 18:35:20.497532 systemd-networkd[1458]: vxlan.calico: Link UP Dec 12 18:35:20.497807 systemd-networkd[1458]: vxlan.calico: Gained carrier Dec 12 18:35:20.499681 systemd[1]: Started cri-containerd-b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee.scope - libcontainer container b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee. 
Dec 12 18:35:20.505492 kubelet[2769]: E1212 18:35:20.505247 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59" Dec 12 18:35:20.518104 containerd[1553]: time="2025-12-12T18:35:20.517889604Z" level=info msg="CreateContainer within sandbox \"1a0724caf725dad28a2ff3952f425f0d46c63363b5861b5ebb1605a589b6b83d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"047378a96dd0a80cc671778a57b02bd813487eaf00195b0237e55bacf2367699\"" Dec 12 18:35:20.519728 containerd[1553]: time="2025-12-12T18:35:20.519658747Z" level=info msg="StartContainer for \"047378a96dd0a80cc671778a57b02bd813487eaf00195b0237e55bacf2367699\"" Dec 12 18:35:20.521345 containerd[1553]: time="2025-12-12T18:35:20.520898636Z" level=info msg="connecting to shim 047378a96dd0a80cc671778a57b02bd813487eaf00195b0237e55bacf2367699" address="unix:///run/containerd/s/39da6b64d2f75900d76ff8f4aab628950881241562b5d212cffe35005c7d771d" protocol=ttrpc version=3 Dec 12 18:35:20.549127 containerd[1553]: time="2025-12-12T18:35:20.549015939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nscm8,Uid:fefe395e-a76c-40a6-a6e5-38c9f3e1ee92,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee\"" Dec 12 18:35:20.551080 kubelet[2769]: E1212 18:35:20.551040 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:20.563030 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:35:20.570207 systemd[1]: Started cri-containerd-047378a96dd0a80cc671778a57b02bd813487eaf00195b0237e55bacf2367699.scope - libcontainer container 047378a96dd0a80cc671778a57b02bd813487eaf00195b0237e55bacf2367699. 
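The goldmane pod has now moved from ErrImagePull (the pull itself failed with NotFound) to ImagePullBackOff (the kubelet is rate-limiting retries). The kubelet's image-pull backoff doubles per failure up to a ceiling; the 10-second initial delay and 5-minute cap below are its commonly cited defaults, assumed rather than taken from this log:

package main

import (
	"fmt"
	"time"
)

// pullBackoff returns the retry delay after the given number of
// consecutive pull failures: exponential doubling with a hard cap.
func pullBackoff(failures int) time.Duration {
	const (
		initial = 10 * time.Second
		max     = 5 * time.Minute
	)
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> retry in %s\n", n, pullBackoff(n))
	}
	// 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
}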
Dec 12 18:35:20.592546 containerd[1553]: time="2025-12-12T18:35:20.592308220Z" level=info msg="CreateContainer within sandbox \"b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:35:20.708967 containerd[1553]: time="2025-12-12T18:35:20.708873321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5c98f7cb-flntl,Uid:1290709f-462a-4bdb-93db-9172d8fdb29d,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7e3328d050f35b6224cf5f18fc2ab61f6bdfc68ff283e43dda98c6e654e2aee\"" Dec 12 18:35:20.712648 containerd[1553]: time="2025-12-12T18:35:20.712604838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:35:20.722143 containerd[1553]: time="2025-12-12T18:35:20.720785851Z" level=info msg="Container 273c44b7a13dc55b5d8ffe86711d91ad36f95bc6a60868c0ffe42fca97a86fde: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:20.729224 containerd[1553]: time="2025-12-12T18:35:20.729158601Z" level=info msg="connecting to shim a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e" address="unix:///run/containerd/s/5b1fee690673d44a873969bf28291421d4486061c68a66fc7067902ff9447d65" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:20.736660 containerd[1553]: time="2025-12-12T18:35:20.736584435Z" level=info msg="StartContainer for \"047378a96dd0a80cc671778a57b02bd813487eaf00195b0237e55bacf2367699\" returns successfully" Dec 12 18:35:20.748002 containerd[1553]: time="2025-12-12T18:35:20.747756064Z" level=info msg="CreateContainer within sandbox \"b4ec145a5699f9a2a26f4e97f19aacd278c8e45200dec688a19466f0fcc19cee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"273c44b7a13dc55b5d8ffe86711d91ad36f95bc6a60868c0ffe42fca97a86fde\"" Dec 12 18:35:20.755195 containerd[1553]: time="2025-12-12T18:35:20.752415041Z" level=info msg="StartContainer for \"273c44b7a13dc55b5d8ffe86711d91ad36f95bc6a60868c0ffe42fca97a86fde\"" Dec 12 18:35:20.760279 containerd[1553]: time="2025-12-12T18:35:20.760220713Z" level=info msg="connecting to shim 273c44b7a13dc55b5d8ffe86711d91ad36f95bc6a60868c0ffe42fca97a86fde" address="unix:///run/containerd/s/c60d9263e2bf9e4c172a70d6475dcb73f4b08796e10865d75b44db772a93dbdb" protocol=ttrpc version=3 Dec 12 18:35:20.778420 systemd[1]: Started cri-containerd-a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e.scope - libcontainer container a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e. Dec 12 18:35:20.797498 systemd[1]: Started cri-containerd-273c44b7a13dc55b5d8ffe86711d91ad36f95bc6a60868c0ffe42fca97a86fde.scope - libcontainer container 273c44b7a13dc55b5d8ffe86711d91ad36f95bc6a60868c0ffe42fca97a86fde. 
Dec 12 18:35:20.826213 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:35:20.972985 containerd[1553]: time="2025-12-12T18:35:20.972870519Z" level=info msg="StartContainer for \"273c44b7a13dc55b5d8ffe86711d91ad36f95bc6a60868c0ffe42fca97a86fde\" returns successfully" Dec 12 18:35:20.980981 containerd[1553]: time="2025-12-12T18:35:20.980861368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-755s8,Uid:924b51e0-ed81-4bc8-a597-a44686b519ff,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a637bc6687dbe93e32ef1fe44cd64623dc8f3460dfaa6747e82391c93ead994e\"" Dec 12 18:35:21.068948 containerd[1553]: time="2025-12-12T18:35:21.068618231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:21.082548 containerd[1553]: time="2025-12-12T18:35:21.082437242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:35:21.083521 kubelet[2769]: E1212 18:35:21.082733 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:35:21.083521 kubelet[2769]: E1212 18:35:21.082818 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:35:21.083521 kubelet[2769]: E1212 18:35:21.083194 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b5c98f7cb-flntl_calico-system(1290709f-462a-4bdb-93db-9172d8fdb29d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:21.083791 containerd[1553]: time="2025-12-12T18:35:21.082829173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:35:21.083791 containerd[1553]: time="2025-12-12T18:35:21.083254819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:35:21.221282 systemd-networkd[1458]: calid8d04edda95: Gained IPv6LL Dec 12 18:35:21.317679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330241458.mount: Deactivated successfully. 
Dec 12 18:35:21.352293 systemd-networkd[1458]: cali8f847213ddb: Gained IPv6LL Dec 12 18:35:21.352696 systemd-networkd[1458]: calif35515c16ef: Gained IPv6LL Dec 12 18:35:21.512360 kubelet[2769]: E1212 18:35:21.512200 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:21.525955 containerd[1553]: time="2025-12-12T18:35:21.525830258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:21.534308 kubelet[2769]: E1212 18:35:21.533975 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:21.537170 containerd[1553]: time="2025-12-12T18:35:21.534857473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:21.537170 containerd[1553]: time="2025-12-12T18:35:21.535481930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:35:21.537632 kubelet[2769]: E1212 18:35:21.535561 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59" Dec 12 18:35:21.537632 kubelet[2769]: E1212 18:35:21.537220 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:21.537632 kubelet[2769]: E1212 18:35:21.537261 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:21.537632 kubelet[2769]: E1212 18:35:21.537436 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b767d98d4-755s8_calico-apiserver(924b51e0-ed81-4bc8-a597-a44686b519ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:21.538356 kubelet[2769]: E1212 18:35:21.537477 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff" Dec 12 18:35:21.539121 containerd[1553]: time="2025-12-12T18:35:21.537830312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:35:21.785710 kubelet[2769]: I1212 18:35:21.785509 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nscm8" podStartSLOduration=46.785483393 podStartE2EDuration="46.785483393s" podCreationTimestamp="2025-12-12 18:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:35:21.640443462 +0000 UTC m=+52.483750617" watchObservedRunningTime="2025-12-12 18:35:21.785483393 +0000 UTC m=+52.628790528" Dec 12 18:35:21.798242 systemd-networkd[1458]: calic1eda70991f: Gained IPv6LL Dec 12 18:35:21.868115 kubelet[2769]: I1212 18:35:21.867841 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-74dh6" podStartSLOduration=46.86781331 podStartE2EDuration="46.86781331s" podCreationTimestamp="2025-12-12 18:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:35:21.843511143 +0000 UTC m=+52.686818298" watchObservedRunningTime="2025-12-12 18:35:21.86781331 +0000 UTC m=+52.711120445" Dec 12 18:35:21.895444 containerd[1553]: time="2025-12-12T18:35:21.895061916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:21.902442 containerd[1553]: time="2025-12-12T18:35:21.902326201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:35:21.903256 containerd[1553]: time="2025-12-12T18:35:21.902511206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:35:21.903318 kubelet[2769]: E1212 18:35:21.902811 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:35:21.903318 kubelet[2769]: E1212 18:35:21.902874 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:35:21.903318 kubelet[2769]: E1212 18:35:21.903003 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-7b5c98f7cb-flntl_calico-system(1290709f-462a-4bdb-93db-9172d8fdb29d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:21.903516 kubelet[2769]: E1212 18:35:21.903061 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5c98f7cb-flntl" podUID="1290709f-462a-4bdb-93db-9172d8fdb29d" Dec 12 18:35:22.246457 systemd-networkd[1458]: calic9be3cf222a: Gained IPv6LL Dec 12 18:35:22.373117 systemd-networkd[1458]: vxlan.calico: Gained IPv6LL Dec 12 18:35:22.541271 kubelet[2769]: E1212 18:35:22.541124 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:22.542736 kubelet[2769]: E1212 18:35:22.542343 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:22.545165 kubelet[2769]: E1212 18:35:22.544845 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff" Dec 12 18:35:22.546020 kubelet[2769]: E1212 18:35:22.544904 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5c98f7cb-flntl" podUID="1290709f-462a-4bdb-93db-9172d8fdb29d" 
Dec 12 18:35:22.949576 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:34152.service - OpenSSH per-connection server daemon (10.0.0.1:34152). Dec 12 18:35:23.143180 sshd[4740]: Accepted publickey for core from 10.0.0.1 port 34152 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:23.151371 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:23.168780 systemd-logind[1540]: New session 11 of user core. Dec 12 18:35:23.178286 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:35:23.517169 sshd[4746]: Connection closed by 10.0.0.1 port 34152 Dec 12 18:35:23.518101 sshd-session[4740]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:23.527770 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:34152.service: Deactivated successfully. Dec 12 18:35:23.532312 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:35:23.534980 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:35:23.541525 systemd-logind[1540]: Removed session 11. Dec 12 18:35:23.545183 kubelet[2769]: E1212 18:35:23.545144 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:23.549358 kubelet[2769]: E1212 18:35:23.549306 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:24.549649 kubelet[2769]: E1212 18:35:24.549586 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:24.550509 kubelet[2769]: E1212 18:35:24.550018 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:27.311966 containerd[1553]: time="2025-12-12T18:35:27.311241894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtnq5,Uid:3590ca52-1c12-4793-a003-8621a1fe8861,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:27.877504 systemd-networkd[1458]: cali7968f25438d: Link UP Dec 12 18:35:27.878499 systemd-networkd[1458]: cali7968f25438d: Gained carrier Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.622 [INFO][4772] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dtnq5-eth0 csi-node-driver- calico-system 3590ca52-1c12-4793-a003-8621a1fe8861 715 0 2025-12-12 18:34:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dtnq5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7968f25438d [] [] }} ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Namespace="calico-system" Pod="csi-node-driver-dtnq5" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtnq5-" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.623 [INFO][4772] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Namespace="calico-system" Pod="csi-node-driver-dtnq5" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtnq5-eth0" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.709 [INFO][4787] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" HandleID="k8s-pod-network.11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Workload="localhost-k8s-csi--node--driver--dtnq5-eth0" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.710 [INFO][4787] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" HandleID="k8s-pod-network.11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Workload="localhost-k8s-csi--node--driver--dtnq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000417e60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dtnq5", "timestamp":"2025-12-12 18:35:27.709122078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.711 [INFO][4787] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.711 [INFO][4787] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.711 [INFO][4787] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.733 [INFO][4787] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" host="localhost" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.743 [INFO][4787] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.756 [INFO][4787] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.763 [INFO][4787] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.768 [INFO][4787] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.768 [INFO][4787] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" host="localhost" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.775 [INFO][4787] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.818 [INFO][4787] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" host="localhost" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.854 [INFO][4787] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" host="localhost" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.854 [INFO][4787] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" host="localhost" Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.855 [INFO][4787] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:35:27.926365 containerd[1553]: 2025-12-12 18:35:27.855 [INFO][4787] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" HandleID="k8s-pod-network.11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Workload="localhost-k8s-csi--node--driver--dtnq5-eth0" Dec 12 18:35:27.931142 containerd[1553]: 2025-12-12 18:35:27.871 [INFO][4772] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Namespace="calico-system" Pod="csi-node-driver-dtnq5" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtnq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dtnq5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3590ca52-1c12-4793-a003-8621a1fe8861", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dtnq5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7968f25438d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:27.931142 containerd[1553]: 2025-12-12 18:35:27.871 [INFO][4772] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Namespace="calico-system" Pod="csi-node-driver-dtnq5" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtnq5-eth0" Dec 12 18:35:27.931142 containerd[1553]: 2025-12-12 18:35:27.871 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7968f25438d ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Namespace="calico-system" Pod="csi-node-driver-dtnq5" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtnq5-eth0" Dec 12 18:35:27.931142 containerd[1553]: 2025-12-12 18:35:27.879 [INFO][4772] cni-plugin/dataplane_linux.go 508:
Disabling IPv4 forwarding ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Namespace="calico-system" Pod="csi-node-driver-dtnq5" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtnq5-eth0" Dec 12 18:35:27.931142 containerd[1553]: 2025-12-12 18:35:27.880 [INFO][4772] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Namespace="calico-system" Pod="csi-node-driver-dtnq5" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtnq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dtnq5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3590ca52-1c12-4793-a003-8621a1fe8861", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a", Pod:"csi-node-driver-dtnq5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7968f25438d", MAC:"92:4f:86:99:b7:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:27.931142 containerd[1553]: 2025-12-12 18:35:27.915 [INFO][4772] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" Namespace="calico-system" Pod="csi-node-driver-dtnq5" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtnq5-eth0" Dec 12 18:35:28.038166 containerd[1553]: time="2025-12-12T18:35:28.037896531Z" level=info msg="connecting to shim 11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a" address="unix:///run/containerd/s/eba542b2c05f822c487a32ba336e4a5d94f27724fa48dbfe1cf19f7dafe7d0ec" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:28.125530 systemd[1]: Started cri-containerd-11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a.scope - libcontainer container 11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a.
Dec 12 18:35:28.166341 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:35:28.270174 containerd[1553]: time="2025-12-12T18:35:28.269570065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtnq5,Uid:3590ca52-1c12-4793-a003-8621a1fe8861,Namespace:calico-system,Attempt:0,} returns sandbox id \"11c9b45f009be62723cf7e83936f1167e48c0a18d7dbb62289b514955fe15c3a\"" Dec 12 18:35:28.290678 containerd[1553]: time="2025-12-12T18:35:28.288322910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:35:28.566650 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:34164.service - OpenSSH per-connection server daemon (10.0.0.1:34164). Dec 12 18:35:28.681933 containerd[1553]: time="2025-12-12T18:35:28.681848559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:28.693995 containerd[1553]: time="2025-12-12T18:35:28.691832867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:35:28.694182 containerd[1553]: time="2025-12-12T18:35:28.694094296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:35:28.694523 kubelet[2769]: E1212 18:35:28.694432 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:35:28.694996 kubelet[2769]: E1212 18:35:28.694554 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:35:28.694996 kubelet[2769]: E1212 18:35:28.694961 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:28.697223 containerd[1553]: time="2025-12-12T18:35:28.696275272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:35:28.791804 sshd[4852]: Accepted publickey for core from 10.0.0.1 port 34164 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:28.800857 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:28.833811 systemd-logind[1540]: New session 12 of user core. Dec 12 18:35:28.857337 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 18:35:29.062318 containerd[1553]: time="2025-12-12T18:35:29.062255608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:29.065575 containerd[1553]: time="2025-12-12T18:35:29.065503287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:35:29.065776 containerd[1553]: time="2025-12-12T18:35:29.065550959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:35:29.066115 kubelet[2769]: E1212 18:35:29.066057 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:29.066213 kubelet[2769]: E1212 18:35:29.066138 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:29.066335 kubelet[2769]: E1212 18:35:29.066302 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:29.066450 kubelet[2769]: E1212 18:35:29.066389 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861" Dec 12 18:35:29.117069 sshd[4856]: Connection closed by 10.0.0.1 port 34164 Dec 12 18:35:29.117489 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:29.123390 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:34164.service: Deactivated successfully. Dec 12 18:35:29.126120 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 12 18:35:29.130989 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:35:29.133727 systemd-logind[1540]: Removed session 12. Dec 12 18:35:29.435700 kernel: hrtimer: interrupt took 5491403 ns Dec 12 18:35:29.608183 kubelet[2769]: E1212 18:35:29.597236 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861" Dec 12 18:35:29.611153 systemd-networkd[1458]: cali7968f25438d: Gained IPv6LL Dec 12 18:35:30.294241 containerd[1553]: time="2025-12-12T18:35:30.292647138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-5tzst,Uid:1f0321c0-7695-4f53-9a29-c3900a354123,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:35:30.620020 systemd-networkd[1458]: cali9dccdbc5f94: Link UP Dec 12 18:35:30.621469 systemd-networkd[1458]: cali9dccdbc5f94: Gained carrier Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.394 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0 calico-apiserver-7b767d98d4- calico-apiserver 1f0321c0-7695-4f53-9a29-c3900a354123 843 0 2025-12-12 18:34:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b767d98d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b767d98d4-5tzst eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9dccdbc5f94 [] [] }} ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-5tzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.394 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-5tzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.497 [INFO][4887] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" HandleID="k8s-pod-network.4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Workload="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" Dec 12 18:35:30.676688 
containerd[1553]: 2025-12-12 18:35:30.497 [INFO][4887] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" HandleID="k8s-pod-network.4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Workload="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038e290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b767d98d4-5tzst", "timestamp":"2025-12-12 18:35:30.497278002 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.497 [INFO][4887] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.497 [INFO][4887] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.497 [INFO][4887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.521 [INFO][4887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" host="localhost" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.534 [INFO][4887] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.557 [INFO][4887] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.560 [INFO][4887] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.567 [INFO][4887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.567 [INFO][4887] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" host="localhost" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.574 [INFO][4887] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60 Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.585 [INFO][4887] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" host="localhost" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.610 [INFO][4887] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" host="localhost" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.610 [INFO][4887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" host="localhost" Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.610 [INFO][4887] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:35:30.676688 containerd[1553]: 2025-12-12 18:35:30.610 [INFO][4887] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" HandleID="k8s-pod-network.4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Workload="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" Dec 12 18:35:30.677528 containerd[1553]: 2025-12-12 18:35:30.615 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-5tzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0", GenerateName:"calico-apiserver-7b767d98d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f0321c0-7695-4f53-9a29-c3900a354123", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b767d98d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b767d98d4-5tzst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dccdbc5f94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:30.677528 containerd[1553]: 2025-12-12 18:35:30.616 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-5tzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" Dec 12 18:35:30.677528 containerd[1553]: 2025-12-12 18:35:30.616 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dccdbc5f94 ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-5tzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" Dec 12 18:35:30.677528 containerd[1553]: 2025-12-12 18:35:30.620 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-5tzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" Dec 12 18:35:30.677528 containerd[1553]: 2025-12-12 18:35:30.621 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-5tzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0", GenerateName:"calico-apiserver-7b767d98d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f0321c0-7695-4f53-9a29-c3900a354123", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b767d98d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60", Pod:"calico-apiserver-7b767d98d4-5tzst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dccdbc5f94", MAC:"16:8a:6d:76:23:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:30.677528 containerd[1553]: 2025-12-12 18:35:30.664 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" Namespace="calico-apiserver" Pod="calico-apiserver-7b767d98d4-5tzst" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b767d98d4--5tzst-eth0" Dec 12 18:35:30.770881 containerd[1553]: time="2025-12-12T18:35:30.770726578Z" level=info msg="connecting to shim 4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60" address="unix:///run/containerd/s/9254540364b1e086b7f4eb504704799d940408e4d164d0d7e9751a137d706b9e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:30.846562 systemd[1]: Started cri-containerd-4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60.scope - libcontainer container 4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60. 
Dec 12 18:35:30.881063 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:35:30.992590 containerd[1553]: time="2025-12-12T18:35:30.992280073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b767d98d4-5tzst,Uid:1f0321c0-7695-4f53-9a29-c3900a354123,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4a6977866f1fb9bcb3bb3a1a3aa2b7b8b67dd165d4366e01b68d052d0fbf6a60\"" Dec 12 18:35:31.001072 containerd[1553]: time="2025-12-12T18:35:31.000846729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:35:31.297196 containerd[1553]: time="2025-12-12T18:35:31.285756975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c994577d-zf2dw,Uid:5bd7d04f-25d6-4f6d-8d32-675830519b60,Namespace:calico-system,Attempt:0,}" Dec 12 18:35:31.374591 containerd[1553]: time="2025-12-12T18:35:31.374310499Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:31.392253 containerd[1553]: time="2025-12-12T18:35:31.392119195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:35:31.392445 containerd[1553]: time="2025-12-12T18:35:31.392286905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:31.392939 kubelet[2769]: E1212 18:35:31.392647 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:31.392939 kubelet[2769]: E1212 18:35:31.392727 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:31.392939 kubelet[2769]: E1212 18:35:31.392835 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b767d98d4-5tzst_calico-apiserver(1f0321c0-7695-4f53-9a29-c3900a354123): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:31.393503 kubelet[2769]: E1212 18:35:31.392888 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123"
Dec 12 18:35:31.608662 kubelet[2769]: E1212 18:35:31.608033 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123" Dec 12 18:35:31.971318 systemd-networkd[1458]: cali116221a8d5a: Link UP Dec 12 18:35:31.971670 systemd-networkd[1458]: cali116221a8d5a: Gained carrier Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.484 [INFO][4954] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0 calico-kube-controllers-57c994577d- calico-system 5bd7d04f-25d6-4f6d-8d32-675830519b60 852 0 2025-12-12 18:34:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57c994577d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-57c994577d-zf2dw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali116221a8d5a [] [] }} ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Namespace="calico-system" Pod="calico-kube-controllers-57c994577d-zf2dw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.485 [INFO][4954] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Namespace="calico-system" Pod="calico-kube-controllers-57c994577d-zf2dw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.622 [INFO][4968] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" HandleID="k8s-pod-network.d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Workload="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.623 [INFO][4968] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" HandleID="k8s-pod-network.d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Workload="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000418cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-57c994577d-zf2dw", "timestamp":"2025-12-12 18:35:31.622789411 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.623 [INFO][4968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.623 [INFO][4968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.623 [INFO][4968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.660 [INFO][4968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" host="localhost" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.689 [INFO][4968] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.704 [INFO][4968] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.725 [INFO][4968] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.733 [INFO][4968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.733 [INFO][4968] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" host="localhost" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.740 [INFO][4968] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7 Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.825 [INFO][4968] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" host="localhost" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.949 [INFO][4968] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" host="localhost" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.949 [INFO][4968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" host="localhost" Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.949 [INFO][4968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 18:35:32.027853 containerd[1553]: 2025-12-12 18:35:31.949 [INFO][4968] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" HandleID="k8s-pod-network.d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Workload="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0" Dec 12 18:35:32.028575 containerd[1553]: 2025-12-12 18:35:31.961 [INFO][4954] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Namespace="calico-system" Pod="calico-kube-controllers-57c994577d-zf2dw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0", GenerateName:"calico-kube-controllers-57c994577d-", Namespace:"calico-system", SelfLink:"", UID:"5bd7d04f-25d6-4f6d-8d32-675830519b60", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c994577d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-57c994577d-zf2dw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali116221a8d5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:32.028575 containerd[1553]: 2025-12-12 18:35:31.961 [INFO][4954] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Namespace="calico-system" Pod="calico-kube-controllers-57c994577d-zf2dw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0" Dec 12 18:35:32.028575 containerd[1553]: 2025-12-12 18:35:31.961 [INFO][4954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali116221a8d5a ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Namespace="calico-system" Pod="calico-kube-controllers-57c994577d-zf2dw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0" Dec 12 18:35:32.028575 containerd[1553]: 2025-12-12 18:35:31.969 [INFO][4954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Namespace="calico-system" Pod="calico-kube-controllers-57c994577d-zf2dw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0"
Dec 12 18:35:32.028575 containerd[1553]: 2025-12-12 18:35:31.970 [INFO][4954] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Namespace="calico-system" Pod="calico-kube-controllers-57c994577d-zf2dw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0", GenerateName:"calico-kube-controllers-57c994577d-", Namespace:"calico-system", SelfLink:"", UID:"5bd7d04f-25d6-4f6d-8d32-675830519b60", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 34, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c994577d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7", Pod:"calico-kube-controllers-57c994577d-zf2dw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali116221a8d5a", MAC:"b2:bf:84:fb:ff:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:35:32.028575 containerd[1553]: 2025-12-12 18:35:32.014 [INFO][4954] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" Namespace="calico-system" Pod="calico-kube-controllers-57c994577d-zf2dw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57c994577d--zf2dw-eth0" Dec 12 18:35:32.117606 containerd[1553]: time="2025-12-12T18:35:32.117337406Z" level=info msg="connecting to shim d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7" address="unix:///run/containerd/s/6e285676781a38ab43b9b53511a77175ce1c19ed03ca23cc142518aaed4b23b6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:32.200375 systemd[1]: Started cri-containerd-d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7.scope - libcontainer container d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7.
Dec 12 18:35:32.255071 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:35:32.357315 systemd-networkd[1458]: cali9dccdbc5f94: Gained IPv6LL Dec 12 18:35:32.385000 containerd[1553]: time="2025-12-12T18:35:32.381824434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c994577d-zf2dw,Uid:5bd7d04f-25d6-4f6d-8d32-675830519b60,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6012b332151c5009be2911741e8504d0d978d5abcc62db1ef4f334e38b518a7\"" Dec 12 18:35:32.386791 containerd[1553]: time="2025-12-12T18:35:32.386686753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:35:32.619970 kubelet[2769]: E1212 18:35:32.619319 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123" Dec 12 18:35:32.746483 containerd[1553]: time="2025-12-12T18:35:32.746387120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:32.755036 containerd[1553]: time="2025-12-12T18:35:32.752208548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:35:32.755036 containerd[1553]: time="2025-12-12T18:35:32.752362391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:35:32.755257 kubelet[2769]: E1212 18:35:32.752578 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:35:32.755257 kubelet[2769]: E1212 18:35:32.752641 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:35:32.755257 kubelet[2769]: E1212 18:35:32.752742 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57c994577d-zf2dw_calico-system(5bd7d04f-25d6-4f6d-8d32-675830519b60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:32.755257 kubelet[2769]: E1212 18:35:32.752789 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60" Dec 12 18:35:33.253182 systemd-networkd[1458]: cali116221a8d5a: Gained IPv6LL Dec 12 18:35:33.635970 kubelet[2769]: E1212 18:35:33.635846 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60" Dec 12 18:35:34.181337 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:43492.service - OpenSSH per-connection server daemon (10.0.0.1:43492). Dec 12 18:35:34.623395 sshd[5034]: Accepted publickey for core from 10.0.0.1 port 43492 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:34.633425 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:34.656794 systemd-logind[1540]: New session 13 of user core. Dec 12 18:35:34.667284 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:35:35.111348 sshd[5037]: Connection closed by 10.0.0.1 port 43492 Dec 12 18:35:35.121061 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:35.131801 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:43492.service: Deactivated successfully. Dec 12 18:35:35.136481 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:35:35.141037 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:35:35.145678 systemd-logind[1540]: Removed session 13.
Dec 12 18:35:35.287409 containerd[1553]: time="2025-12-12T18:35:35.287042636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:35:35.678753 containerd[1553]: time="2025-12-12T18:35:35.678518180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:35.703504 containerd[1553]: time="2025-12-12T18:35:35.703286903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:35:35.703504 containerd[1553]: time="2025-12-12T18:35:35.703453270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:35.707310 kubelet[2769]: E1212 18:35:35.704327 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:35:35.707310 kubelet[2769]: E1212 18:35:35.704421 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:35:35.707310 kubelet[2769]: E1212 18:35:35.704528 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wcf2b_calico-system(4c88e5b7-6c17-45c7-92f0-9be254ebdd59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:35.707310 kubelet[2769]: E1212 18:35:35.704574 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59" Dec 12 18:35:36.278291 containerd[1553]: time="2025-12-12T18:35:36.276441782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:35:36.823320 containerd[1553]: time="2025-12-12T18:35:36.823178972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:36.827985 containerd[1553]: time="2025-12-12T18:35:36.827042546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 18:35:36.827985 containerd[1553]: time="2025-12-12T18:35:36.827174117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:35:36.828093 kubelet[2769]: E1212 18:35:36.827395 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:35:36.828093 kubelet[2769]: E1212 18:35:36.827462 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:35:36.828093 kubelet[2769]: E1212 18:35:36.827567 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b5c98f7cb-flntl_calico-system(1290709f-462a-4bdb-93db-9172d8fdb29d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:36.830587 containerd[1553]: time="2025-12-12T18:35:36.830107871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:35:37.254545 containerd[1553]: time="2025-12-12T18:35:37.253889109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:37.330326 containerd[1553]: time="2025-12-12T18:35:37.330090399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:35:37.330326 containerd[1553]: time="2025-12-12T18:35:37.330256896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:35:37.331433 kubelet[2769]: E1212 18:35:37.330957 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:35:37.331433 kubelet[2769]: E1212 18:35:37.331032 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:35:37.331433 kubelet[2769]: E1212 18:35:37.331313 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b5c98f7cb-flntl_calico-system(1290709f-462a-4bdb-93db-9172d8fdb29d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:37.331571 kubelet[2769]: E1212 18:35:37.331369 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5c98f7cb-flntl" podUID="1290709f-462a-4bdb-93db-9172d8fdb29d" Dec 12 18:35:37.331647 containerd[1553]: time="2025-12-12T18:35:37.331589131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:35:37.687963 containerd[1553]: time="2025-12-12T18:35:37.687620130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:37.729714 containerd[1553]: time="2025-12-12T18:35:37.726864842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:35:37.729714 containerd[1553]: time="2025-12-12T18:35:37.727000539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:37.729972 kubelet[2769]: E1212 18:35:37.727489 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:37.729972 kubelet[2769]: E1212 18:35:37.727572 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:37.729972 kubelet[2769]: E1212 18:35:37.727710 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b767d98d4-755s8_calico-apiserver(924b51e0-ed81-4bc8-a597-a44686b519ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:35:37.729972 kubelet[2769]: E1212 18:35:37.727761 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff" Dec 12 18:35:40.156364 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:43500.service - OpenSSH per-connection server daemon (10.0.0.1:43500). Dec 12 18:35:40.256905 sshd[5066]: Accepted publickey for core from 10.0.0.1 port 43500 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:40.271453 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:40.292679 systemd-logind[1540]: New session 14 of user core. Dec 12 18:35:40.305653 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:35:40.538799 sshd[5069]: Connection closed by 10.0.0.1 port 43500 Dec 12 18:35:40.539351 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:40.546116 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:43500.service: Deactivated successfully. Dec 12 18:35:40.548768 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:35:40.552077 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:35:40.553339 systemd-logind[1540]: Removed session 14. Dec 12 18:35:43.270838 containerd[1553]: time="2025-12-12T18:35:43.270757599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:35:43.633723 containerd[1553]: time="2025-12-12T18:35:43.633659494Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:43.768453 containerd[1553]: time="2025-12-12T18:35:43.768144378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:35:43.768453 containerd[1553]: time="2025-12-12T18:35:43.768269276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:35:43.768707 kubelet[2769]: E1212 18:35:43.768604 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:35:43.768707 kubelet[2769]: E1212 18:35:43.768671 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:35:43.769285 kubelet[2769]: E1212 18:35:43.768766 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:35:43.779838 containerd[1553]: time="2025-12-12T18:35:43.779744577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:35:44.226799 containerd[1553]: time="2025-12-12T18:35:44.226561429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:44.235722 containerd[1553]: time="2025-12-12T18:35:44.232451040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:35:44.235722 containerd[1553]: time="2025-12-12T18:35:44.232635300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:35:44.236060 kubelet[2769]: E1212 18:35:44.232870 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:44.238994 kubelet[2769]: E1212 18:35:44.237429 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:35:44.238994 kubelet[2769]: E1212 18:35:44.237577 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:44.238994 kubelet[2769]: E1212 18:35:44.237635 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861" Dec 12 18:35:45.271017 containerd[1553]: time="2025-12-12T18:35:45.270884114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:35:45.560431 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:49520.service - OpenSSH per-connection server daemon (10.0.0.1:49520). Dec 12 18:35:45.616682 containerd[1553]: time="2025-12-12T18:35:45.616613800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:45.624078 containerd[1553]: time="2025-12-12T18:35:45.622458883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:35:45.624078 containerd[1553]: time="2025-12-12T18:35:45.622572369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:35:45.624274 kubelet[2769]: E1212 18:35:45.622751 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:45.624274 kubelet[2769]: E1212 18:35:45.622805 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:35:45.624274 kubelet[2769]: E1212 18:35:45.622896 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b767d98d4-5tzst_calico-apiserver(1f0321c0-7695-4f53-9a29-c3900a354123): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:45.624274 kubelet[2769]: E1212 18:35:45.622969 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123" Dec 12 18:35:45.636707 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 49520 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:45.638136 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:45.647843 systemd-logind[1540]: New session 15 of user core. Dec 12 18:35:45.659173 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:35:45.830817 sshd[5094]: Connection closed by 10.0.0.1 port 49520 Dec 12 18:35:45.833130 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:45.842846 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:49520.service: Deactivated successfully. Dec 12 18:35:45.846411 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:35:45.848493 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit.
Dec 12 18:35:45.853746 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:49524.service - OpenSSH per-connection server daemon (10.0.0.1:49524). Dec 12 18:35:45.855079 systemd-logind[1540]: Removed session 15. Dec 12 18:35:45.940792 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 49524 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:45.945330 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:45.953145 systemd-logind[1540]: New session 16 of user core. Dec 12 18:35:45.963441 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:35:46.237797 sshd[5111]: Connection closed by 10.0.0.1 port 49524 Dec 12 18:35:46.238283 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:46.265663 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:49524.service: Deactivated successfully. Dec 12 18:35:46.269062 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:35:46.271736 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:35:46.275089 systemd-logind[1540]: Removed session 16. Dec 12 18:35:46.278367 systemd[1]: Started sshd@16-10.0.0.38:22-10.0.0.1:49526.service - OpenSSH per-connection server daemon (10.0.0.1:49526). Dec 12 18:35:46.381607 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 49526 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:46.386287 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:46.399410 systemd-logind[1540]: New session 17 of user core. Dec 12 18:35:46.409071 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 18:35:46.554509 sshd[5125]: Connection closed by 10.0.0.1 port 49526 Dec 12 18:35:46.554871 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:46.559878 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:49526.service: Deactivated successfully. Dec 12 18:35:46.562545 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:35:46.563667 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:35:46.564888 systemd-logind[1540]: Removed session 17. 
Dec 12 18:35:47.268163 containerd[1553]: time="2025-12-12T18:35:47.268037649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:35:47.759329 containerd[1553]: time="2025-12-12T18:35:47.759271008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:35:47.763127 containerd[1553]: time="2025-12-12T18:35:47.763057982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:35:47.763349 containerd[1553]: time="2025-12-12T18:35:47.763166538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:35:47.763549 kubelet[2769]: E1212 18:35:47.763489 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:35:47.764180 kubelet[2769]: E1212 18:35:47.763557 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:35:47.764180 kubelet[2769]: E1212 18:35:47.763661 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57c994577d-zf2dw_calico-system(5bd7d04f-25d6-4f6d-8d32-675830519b60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:35:47.764180 kubelet[2769]: E1212 18:35:47.763706 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60" Dec 12 18:35:48.267377 kubelet[2769]: E1212 18:35:48.267335 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:49.268739 kubelet[2769]: E1212 18:35:49.268691 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:35:49.280746 kubelet[2769]: E1212 18:35:49.270995 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff" Dec 12 18:35:49.280746 kubelet[2769]: E1212 18:35:49.273350 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5c98f7cb-flntl" podUID="1290709f-462a-4bdb-93db-9172d8fdb29d" Dec 12 18:35:49.817954 kubelet[2769]: E1212 18:35:49.817783 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:50.271356 kubelet[2769]: E1212 18:35:50.269546 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59" Dec 12 18:35:51.606087 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:54068.service - OpenSSH per-connection server daemon (10.0.0.1:54068). Dec 12 18:35:51.754102 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 54068 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:51.763011 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:51.784446 systemd-logind[1540]: New session 18 of user core. Dec 12 18:35:51.826065 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:35:52.268942 kubelet[2769]: E1212 18:35:52.268847 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:52.276822 sshd[5167]: Connection closed by 10.0.0.1 port 54068 Dec 12 18:35:52.279032 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:52.288876 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit.
Dec 12 18:35:52.290777 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:54068.service: Deactivated successfully. Dec 12 18:35:52.296418 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:35:52.305611 systemd-logind[1540]: Removed session 18. Dec 12 18:35:57.282001 kubelet[2769]: E1212 18:35:57.280126 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:57.339843 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:54084.service - OpenSSH per-connection server daemon (10.0.0.1:54084). Dec 12 18:35:57.573098 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 54084 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:35:57.585393 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:35:57.620521 systemd-logind[1540]: New session 19 of user core. Dec 12 18:35:57.630409 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 18:35:58.072789 sshd[5186]: Connection closed by 10.0.0.1 port 54084 Dec 12 18:35:58.080536 sshd-session[5183]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:58.097376 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:54084.service: Deactivated successfully. Dec 12 18:35:58.101581 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:35:58.105208 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:35:58.108717 systemd-logind[1540]: Removed session 19. Dec 12 18:35:59.297503 kubelet[2769]: E1212 18:35:59.297349 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861" Dec 12 18:36:00.269300 kubelet[2769]: E1212 18:36:00.268186 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60"
Dec 12 18:36:00.269300 kubelet[2769]: E1212 18:36:00.268603 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123" Dec 12 18:36:01.279190 containerd[1553]: time="2025-12-12T18:36:01.279066002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:36:01.730960 containerd[1553]: time="2025-12-12T18:36:01.730542616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:01.741455 containerd[1553]: time="2025-12-12T18:36:01.741245327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:36:01.741455 containerd[1553]: time="2025-12-12T18:36:01.741416261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:36:01.742504 kubelet[2769]: E1212 18:36:01.741830 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:36:01.742504 kubelet[2769]: E1212 18:36:01.741937 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:36:01.742504 kubelet[2769]: E1212 18:36:01.742026 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b767d98d4-755s8_calico-apiserver(924b51e0-ed81-4bc8-a597-a44686b519ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:01.742504 kubelet[2769]: E1212 18:36:01.742072 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff" Dec 12 18:36:03.104225 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:35218.service - OpenSSH per-connection server daemon (10.0.0.1:35218).
Dec 12 18:36:03.287719 containerd[1553]: time="2025-12-12T18:36:03.287671762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:36:03.352773 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 35218 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:36:03.357588 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:36:03.391215 systemd-logind[1540]: New session 20 of user core. Dec 12 18:36:03.414230 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 18:36:03.704794 containerd[1553]: time="2025-12-12T18:36:03.704430778Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:03.712537 containerd[1553]: time="2025-12-12T18:36:03.712440371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:36:03.713287 containerd[1553]: time="2025-12-12T18:36:03.713137850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:36:03.714752 kubelet[2769]: E1212 18:36:03.714608 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:36:03.716669 kubelet[2769]: E1212 18:36:03.714746 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:36:03.716669 kubelet[2769]: E1212 18:36:03.715124 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b5c98f7cb-flntl_calico-system(1290709f-462a-4bdb-93db-9172d8fdb29d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:03.721902 containerd[1553]: time="2025-12-12T18:36:03.719805166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:36:03.755360 sshd[5223]: Connection closed by 10.0.0.1 port 35218 Dec 12 18:36:03.756171 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Dec 12 18:36:03.776260 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:35218.service: Deactivated successfully. Dec 12 18:36:03.783525 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:36:03.785863 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:36:03.794860 systemd-logind[1540]: Removed session 20. 
Dec 12 18:36:04.126526 containerd[1553]: time="2025-12-12T18:36:04.126304266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:04.142589 containerd[1553]: time="2025-12-12T18:36:04.142450148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:36:04.142785 containerd[1553]: time="2025-12-12T18:36:04.142562801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:36:04.142995 kubelet[2769]: E1212 18:36:04.142895 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:36:04.143082 kubelet[2769]: E1212 18:36:04.142996 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:36:04.143154 kubelet[2769]: E1212 18:36:04.143118 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b5c98f7cb-flntl_calico-system(1290709f-462a-4bdb-93db-9172d8fdb29d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:04.143232 kubelet[2769]: E1212 18:36:04.143185 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5c98f7cb-flntl" podUID="1290709f-462a-4bdb-93db-9172d8fdb29d" Dec 12 18:36:05.278974 containerd[1553]: time="2025-12-12T18:36:05.277308182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:36:05.643923 containerd[1553]: time="2025-12-12T18:36:05.643502597Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:05.658229 containerd[1553]: time="2025-12-12T18:36:05.658059650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:36:05.658229 containerd[1553]: time="2025-12-12T18:36:05.658189726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:36:05.658688 kubelet[2769]: E1212 18:36:05.658632 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:36:05.659811 kubelet[2769]: E1212 18:36:05.659193 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:36:05.659811 kubelet[2769]: E1212 18:36:05.659314 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wcf2b_calico-system(4c88e5b7-6c17-45c7-92f0-9be254ebdd59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:05.659811 kubelet[2769]: E1212 18:36:05.659358 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59" Dec 12 18:36:08.787374 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:35220.service - OpenSSH per-connection server daemon (10.0.0.1:35220). Dec 12 18:36:08.991928 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 35220 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:36:08.992508 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:36:09.021474 systemd-logind[1540]: New session 21 of user core. Dec 12 18:36:09.048536 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:36:09.442038 sshd[5242]: Connection closed by 10.0.0.1 port 35220 Dec 12 18:36:09.445442 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Dec 12 18:36:09.456048 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:35220.service: Deactivated successfully. Dec 12 18:36:09.469868 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:36:09.474217 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:36:09.475576 systemd-logind[1540]: Removed session 21.
Dec 12 18:36:12.275025 containerd[1553]: time="2025-12-12T18:36:12.270939162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 18:36:12.630688 containerd[1553]: time="2025-12-12T18:36:12.630339862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:12.645322 containerd[1553]: time="2025-12-12T18:36:12.641131399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 18:36:12.645322 containerd[1553]: time="2025-12-12T18:36:12.641894650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 18:36:12.645578 kubelet[2769]: E1212 18:36:12.642261 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:36:12.645578 kubelet[2769]: E1212 18:36:12.642336 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:36:12.645578 kubelet[2769]: E1212 18:36:12.642579 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:12.647090 containerd[1553]: time="2025-12-12T18:36:12.646721965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:36:13.043979 containerd[1553]: time="2025-12-12T18:36:13.043621724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:13.058803 containerd[1553]: time="2025-12-12T18:36:13.057821093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 18:36:13.058803 containerd[1553]: time="2025-12-12T18:36:13.057965465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 18:36:13.060618 kubelet[2769]: E1212 18:36:13.060553 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:36:13.060726 kubelet[2769]: E1212 18:36:13.060629 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:36:13.060930 kubelet[2769]: E1212 18:36:13.060878 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b767d98d4-5tzst_calico-apiserver(1f0321c0-7695-4f53-9a29-c3900a354123): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:13.061100 kubelet[2769]: E1212 18:36:13.061039 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123"
Dec 12 18:36:13.061867 containerd[1553]: time="2025-12-12T18:36:13.061785448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 18:36:13.425713 containerd[1553]: time="2025-12-12T18:36:13.421088434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:13.436329 containerd[1553]: time="2025-12-12T18:36:13.436220434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 18:36:13.436532 containerd[1553]: time="2025-12-12T18:36:13.436383161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 18:36:13.437710 kubelet[2769]: E1212 18:36:13.436696 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:36:13.437710 kubelet[2769]: E1212 18:36:13.436777 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:36:13.437710 kubelet[2769]: E1212 18:36:13.437035 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:13.437969 kubelet[2769]: E1212 18:36:13.437092 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861"
Dec 12 18:36:13.444867 containerd[1553]: time="2025-12-12T18:36:13.437736126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 18:36:13.841370 containerd[1553]: time="2025-12-12T18:36:13.841022474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:13.857102 containerd[1553]: time="2025-12-12T18:36:13.856888048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 18:36:13.857102 containerd[1553]: time="2025-12-12T18:36:13.857047569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 18:36:13.857632 kubelet[2769]: E1212 18:36:13.857560 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:36:13.865364 kubelet[2769]: E1212 18:36:13.859739 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:36:13.865364 kubelet[2769]: E1212 18:36:13.860078 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57c994577d-zf2dw_calico-system(5bd7d04f-25d6-4f6d-8d32-675830519b60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:13.865364 kubelet[2769]: E1212 18:36:13.860127 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60"
Dec 12 18:36:14.499275 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:55350.service - OpenSSH per-connection server daemon (10.0.0.1:55350).
Dec 12 18:36:14.640317 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 55350 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:14.642422 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:14.672321 systemd-logind[1540]: New session 22 of user core.
Dec 12 18:36:14.684214 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 12 18:36:14.919986 sshd[5261]: Connection closed by 10.0.0.1 port 55350
Dec 12 18:36:14.919808 sshd-session[5258]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:14.939946 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:55350.service: Deactivated successfully.
Dec 12 18:36:14.954688 systemd[1]: session-22.scope: Deactivated successfully.
Dec 12 18:36:14.962011 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit.
Dec 12 18:36:14.972399 systemd-logind[1540]: Removed session 22.
Dec 12 18:36:15.281470 kubelet[2769]: E1212 18:36:15.281221 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff"
Dec 12 18:36:16.273144 kubelet[2769]: E1212 18:36:16.271536 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59"
Dec 12 18:36:18.277623 kubelet[2769]: E1212 18:36:18.271689 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5c98f7cb-flntl" podUID="1290709f-462a-4bdb-93db-9172d8fdb29d"
Dec 12 18:36:19.943189 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:55366.service - OpenSSH per-connection server daemon (10.0.0.1:55366).
Dec 12 18:36:20.070128 sshd[5300]: Accepted publickey for core from 10.0.0.1 port 55366 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:20.075215 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:20.096036 systemd-logind[1540]: New session 23 of user core.
Dec 12 18:36:20.115932 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 18:36:20.267041 kubelet[2769]: E1212 18:36:20.266865 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:36:20.389068 sshd[5303]: Connection closed by 10.0.0.1 port 55366
Dec 12 18:36:20.389661 sshd-session[5300]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:20.402821 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:55366.service: Deactivated successfully.
Dec 12 18:36:20.412261 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 18:36:20.417802 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit.
Dec 12 18:36:20.427098 systemd-logind[1540]: Removed session 23.
Dec 12 18:36:23.271767 kubelet[2769]: E1212 18:36:23.271699 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123"
Dec 12 18:36:25.438128 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:58646.service - OpenSSH per-connection server daemon (10.0.0.1:58646).
Dec 12 18:36:25.632846 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 58646 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:25.642575 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:25.671583 systemd-logind[1540]: New session 24 of user core.
Dec 12 18:36:25.688302 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 12 18:36:26.034170 sshd[5320]: Connection closed by 10.0.0.1 port 58646
Dec 12 18:36:26.027124 sshd-session[5317]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:26.054710 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:58646.service: Deactivated successfully.
Dec 12 18:36:26.062459 systemd[1]: session-24.scope: Deactivated successfully.
Dec 12 18:36:26.072705 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit.
Dec 12 18:36:26.077384 systemd-logind[1540]: Removed session 24.
Dec 12 18:36:26.291623 kubelet[2769]: E1212 18:36:26.285993 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861"
Dec 12 18:36:28.269463 kubelet[2769]: E1212 18:36:28.269379 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff"
Dec 12 18:36:29.276358 kubelet[2769]: E1212 18:36:29.275737 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59"
Dec 12 18:36:29.276358 kubelet[2769]: E1212 18:36:29.276172 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60"
Dec 12 18:36:30.274159 kubelet[2769]: E1212 18:36:30.274063 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5c98f7cb-flntl" podUID="1290709f-462a-4bdb-93db-9172d8fdb29d"
Dec 12 18:36:31.074132 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:40412.service - OpenSSH per-connection server daemon (10.0.0.1:40412).
Dec 12 18:36:31.237845 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 40412 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:31.242373 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:31.277281 systemd-logind[1540]: New session 25 of user core.
Dec 12 18:36:31.328396 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 12 18:36:31.660835 sshd[5338]: Connection closed by 10.0.0.1 port 40412
Dec 12 18:36:31.662542 sshd-session[5335]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:31.675450 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:40412.service: Deactivated successfully.
Dec 12 18:36:31.678880 systemd[1]: session-25.scope: Deactivated successfully.
Dec 12 18:36:31.681028 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit.
Dec 12 18:36:31.687422 systemd[1]: Started sshd@25-10.0.0.38:22-10.0.0.1:40428.service - OpenSSH per-connection server daemon (10.0.0.1:40428).
Dec 12 18:36:31.702400 systemd-logind[1540]: Removed session 25.
Dec 12 18:36:31.770785 sshd[5351]: Accepted publickey for core from 10.0.0.1 port 40428 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:31.778486 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:31.819794 systemd-logind[1540]: New session 26 of user core.
Dec 12 18:36:31.830866 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 12 18:36:33.860092 sshd[5354]: Connection closed by 10.0.0.1 port 40428
Dec 12 18:36:33.860941 sshd-session[5351]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:33.901895 systemd[1]: sshd@25-10.0.0.38:22-10.0.0.1:40428.service: Deactivated successfully.
Dec 12 18:36:33.916544 systemd[1]: session-26.scope: Deactivated successfully.
Dec 12 18:36:33.927090 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit.
Dec 12 18:36:33.937974 systemd[1]: Started sshd@26-10.0.0.38:22-10.0.0.1:40444.service - OpenSSH per-connection server daemon (10.0.0.1:40444).
Dec 12 18:36:33.951021 systemd-logind[1540]: Removed session 26.
Dec 12 18:36:34.294341 kubelet[2769]: E1212 18:36:34.289464 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123"
Dec 12 18:36:34.511114 sshd[5365]: Accepted publickey for core from 10.0.0.1 port 40444 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:34.515346 sshd-session[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:34.541117 systemd-logind[1540]: New session 27 of user core.
Dec 12 18:36:34.567790 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 12 18:36:36.583197 sshd[5368]: Connection closed by 10.0.0.1 port 40444
Dec 12 18:36:36.589405 sshd-session[5365]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:36.616688 systemd[1]: sshd@26-10.0.0.38:22-10.0.0.1:40444.service: Deactivated successfully.
Dec 12 18:36:36.619742 systemd[1]: session-27.scope: Deactivated successfully.
Dec 12 18:36:36.626899 systemd-logind[1540]: Session 27 logged out. Waiting for processes to exit.
Dec 12 18:36:36.640091 systemd-logind[1540]: Removed session 27.
Dec 12 18:36:36.646515 systemd[1]: Started sshd@27-10.0.0.38:22-10.0.0.1:40454.service - OpenSSH per-connection server daemon (10.0.0.1:40454).
Dec 12 18:36:36.859571 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 40454 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:36.875275 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:36.915671 systemd-logind[1540]: New session 28 of user core.
Dec 12 18:36:36.926112 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 12 18:36:37.731515 sshd[5405]: Connection closed by 10.0.0.1 port 40454
Dec 12 18:36:37.738783 sshd-session[5402]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:37.783565 systemd[1]: sshd@27-10.0.0.38:22-10.0.0.1:40454.service: Deactivated successfully.
Dec 12 18:36:37.798244 systemd[1]: session-28.scope: Deactivated successfully.
Dec 12 18:36:37.805850 systemd-logind[1540]: Session 28 logged out. Waiting for processes to exit.
Dec 12 18:36:37.826353 systemd[1]: Started sshd@28-10.0.0.38:22-10.0.0.1:40462.service - OpenSSH per-connection server daemon (10.0.0.1:40462).
Dec 12 18:36:37.837540 systemd-logind[1540]: Removed session 28.
Dec 12 18:36:37.998791 sshd[5421]: Accepted publickey for core from 10.0.0.1 port 40462 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:38.000697 sshd-session[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:38.018902 systemd-logind[1540]: New session 29 of user core.
Dec 12 18:36:38.031272 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 12 18:36:38.232473 sshd[5424]: Connection closed by 10.0.0.1 port 40462
Dec 12 18:36:38.232871 sshd-session[5421]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:38.237949 systemd[1]: sshd@28-10.0.0.38:22-10.0.0.1:40462.service: Deactivated successfully.
Dec 12 18:36:38.241795 systemd[1]: session-29.scope: Deactivated successfully.
Dec 12 18:36:38.247689 systemd-logind[1540]: Session 29 logged out. Waiting for processes to exit.
Dec 12 18:36:38.249578 systemd-logind[1540]: Removed session 29.
Dec 12 18:36:38.268463 kubelet[2769]: E1212 18:36:38.268220 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:36:40.270487 kubelet[2769]: E1212 18:36:40.269518 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff"
Dec 12 18:36:41.281497 kubelet[2769]: E1212 18:36:41.281286 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60"
Dec 12 18:36:41.300999 kubelet[2769]: E1212 18:36:41.285047 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861"
Dec 12 18:36:43.258801 systemd[1]: Started sshd@29-10.0.0.38:22-10.0.0.1:48256.service - OpenSSH per-connection server daemon (10.0.0.1:48256).
Dec 12 18:36:43.273500 kubelet[2769]: E1212 18:36:43.273389 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59"
Dec 12 18:36:43.350363 sshd[5443]: Accepted publickey for core from 10.0.0.1 port 48256 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:43.362722 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:43.383674 systemd-logind[1540]: New session 30 of user core.
Dec 12 18:36:43.394176 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 12 18:36:43.645221 sshd[5446]: Connection closed by 10.0.0.1 port 48256
Dec 12 18:36:43.645701 sshd-session[5443]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:43.668614 systemd[1]: sshd@29-10.0.0.38:22-10.0.0.1:48256.service: Deactivated successfully.
Dec 12 18:36:43.669741 systemd-logind[1540]: Session 30 logged out. Waiting for processes to exit.
Dec 12 18:36:43.677663 systemd[1]: session-30.scope: Deactivated successfully.
Dec 12 18:36:43.687075 systemd-logind[1540]: Removed session 30.
Dec 12 18:36:45.276493 containerd[1553]: time="2025-12-12T18:36:45.276425938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 18:36:45.668222 containerd[1553]: time="2025-12-12T18:36:45.668113248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:45.673106 containerd[1553]: time="2025-12-12T18:36:45.672796259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 18:36:45.673106 containerd[1553]: time="2025-12-12T18:36:45.672900576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 12 18:36:45.673485 kubelet[2769]: E1212 18:36:45.673401 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:36:45.673891 kubelet[2769]: E1212 18:36:45.673485 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:36:45.673968 kubelet[2769]: E1212 18:36:45.673815 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b5c98f7cb-flntl_calico-system(1290709f-462a-4bdb-93db-9172d8fdb29d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:45.681739 containerd[1553]: time="2025-12-12T18:36:45.681604762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 18:36:46.053064 containerd[1553]: time="2025-12-12T18:36:46.052518036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:46.060782 containerd[1553]: time="2025-12-12T18:36:46.060703445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 12 18:36:46.060782 containerd[1553]: time="2025-12-12T18:36:46.060664611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 18:36:46.061771 kubelet[2769]: E1212 18:36:46.061107 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:36:46.061771 kubelet[2769]: E1212 18:36:46.061182 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:36:46.061771 kubelet[2769]: E1212 18:36:46.061272 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b5c98f7cb-flntl_calico-system(1290709f-462a-4bdb-93db-9172d8fdb29d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:46.061973 kubelet[2769]: E1212 18:36:46.061337 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5c98f7cb-flntl" podUID="1290709f-462a-4bdb-93db-9172d8fdb29d"
Dec 12 18:36:48.678102 systemd[1]: Started sshd@30-10.0.0.38:22-10.0.0.1:48272.service - OpenSSH per-connection server daemon (10.0.0.1:48272).
Dec 12 18:36:48.840985 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 48272 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:48.846090 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:48.870835 systemd-logind[1540]: New session 31 of user core.
Dec 12 18:36:48.882947 systemd[1]: Started session-31.scope - Session 31 of User core.
Dec 12 18:36:49.120322 sshd[5462]: Connection closed by 10.0.0.1 port 48272
Dec 12 18:36:49.119814 sshd-session[5459]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:49.133588 systemd[1]: sshd@30-10.0.0.38:22-10.0.0.1:48272.service: Deactivated successfully.
Dec 12 18:36:49.139109 systemd[1]: session-31.scope: Deactivated successfully.
Dec 12 18:36:49.146495 systemd-logind[1540]: Session 31 logged out. Waiting for processes to exit.
Dec 12 18:36:49.158544 systemd-logind[1540]: Removed session 31.
Dec 12 18:36:49.275184 kubelet[2769]: E1212 18:36:49.274898 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123"
Dec 12 18:36:50.267647 kubelet[2769]: E1212 18:36:50.267284 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:36:53.276383 containerd[1553]: time="2025-12-12T18:36:53.275631694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 18:36:53.632096 containerd[1553]: time="2025-12-12T18:36:53.631845355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:53.634108 containerd[1553]: time="2025-12-12T18:36:53.634050137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 18:36:53.638591 containerd[1553]: time="2025-12-12T18:36:53.634086555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 18:36:53.639618 kubelet[2769]: E1212 18:36:53.638964 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:36:53.639618 kubelet[2769]: E1212 18:36:53.639035 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:36:53.639618 kubelet[2769]: E1212 18:36:53.639133 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:53.645945 containerd[1553]: time="2025-12-12T18:36:53.645825581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 18:36:54.026477 containerd[1553]: time="2025-12-12T18:36:54.025824721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:54.032006 containerd[1553]: time="2025-12-12T18:36:54.031789894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 18:36:54.032006 containerd[1553]: time="2025-12-12T18:36:54.031883129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 18:36:54.034250 kubelet[2769]: E1212 18:36:54.032244 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:36:54.034250 kubelet[2769]: E1212 18:36:54.032333 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:36:54.034250 kubelet[2769]: E1212 18:36:54.032460 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-dtnq5_calico-system(3590ca52-1c12-4793-a003-8621a1fe8861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:54.034512 kubelet[2769]: E1212 18:36:54.032524 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtnq5" podUID="3590ca52-1c12-4793-a003-8621a1fe8861"
Dec 12 18:36:54.162066 systemd[1]: Started sshd@31-10.0.0.38:22-10.0.0.1:34974.service - OpenSSH per-connection server daemon (10.0.0.1:34974).
Dec 12 18:36:54.269942 kubelet[2769]: E1212 18:36:54.269672 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:36:54.287617 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 34974 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:54.293380 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:54.329112 systemd-logind[1540]: New session 32 of user core.
Dec 12 18:36:54.345610 systemd[1]: Started session-32.scope - Session 32 of User core.
Dec 12 18:36:54.620784 sshd[5521]: Connection closed by 10.0.0.1 port 34974
Dec 12 18:36:54.621223 sshd-session[5518]: pam_unix(sshd:session): session closed for user core
Dec 12 18:36:54.635803 systemd[1]: sshd@31-10.0.0.38:22-10.0.0.1:34974.service: Deactivated successfully.
Dec 12 18:36:54.641761 systemd[1]: session-32.scope: Deactivated successfully.
Dec 12 18:36:54.647860 systemd-logind[1540]: Session 32 logged out. Waiting for processes to exit.
Dec 12 18:36:54.650767 systemd-logind[1540]: Removed session 32.
Dec 12 18:36:55.278857 containerd[1553]: time="2025-12-12T18:36:55.278280281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 18:36:55.618486 containerd[1553]: time="2025-12-12T18:36:55.618173450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:55.619926 containerd[1553]: time="2025-12-12T18:36:55.619815662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 18:36:55.620046 containerd[1553]: time="2025-12-12T18:36:55.620025447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 18:36:55.620635 kubelet[2769]: E1212 18:36:55.620542 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:36:55.621080 kubelet[2769]: E1212 18:36:55.620644 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:36:55.621080 kubelet[2769]: E1212 18:36:55.621004 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wcf2b_calico-system(4c88e5b7-6c17-45c7-92f0-9be254ebdd59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:55.621080 kubelet[2769]: E1212 18:36:55.621046 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wcf2b" podUID="4c88e5b7-6c17-45c7-92f0-9be254ebdd59"
Dec 12 18:36:55.621802 containerd[1553]: time="2025-12-12T18:36:55.621724757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:36:56.000897 containerd[1553]: time="2025-12-12T18:36:55.999538018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:56.004206 containerd[1553]: time="2025-12-12T18:36:56.004001822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 18:36:56.004206 containerd[1553]: time="2025-12-12T18:36:56.004133640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 18:36:56.004496 kubelet[2769]: E1212 18:36:56.004353 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:36:56.004496 kubelet[2769]: E1212 18:36:56.004419 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:36:56.004590 kubelet[2769]: E1212 18:36:56.004521 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b767d98d4-755s8_calico-apiserver(924b51e0-ed81-4bc8-a597-a44686b519ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:56.004590 kubelet[2769]: E1212 18:36:56.004565 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-755s8" podUID="924b51e0-ed81-4bc8-a597-a44686b519ff"
Dec 12 18:36:56.278977 containerd[1553]: time="2025-12-12T18:36:56.278650752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 18:36:56.652147 containerd[1553]: time="2025-12-12T18:36:56.651843450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:36:56.685196 containerd[1553]: time="2025-12-12T18:36:56.682422739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 18:36:56.685196 containerd[1553]: time="2025-12-12T18:36:56.682570738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 18:36:56.685459 kubelet[2769]: E1212 18:36:56.683385 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:36:56.685459 kubelet[2769]: E1212 18:36:56.683451 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:36:56.685459 kubelet[2769]: E1212 18:36:56.683554 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57c994577d-zf2dw_calico-system(5bd7d04f-25d6-4f6d-8d32-675830519b60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:36:56.685459 kubelet[2769]: E1212 18:36:56.683592 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57c994577d-zf2dw" podUID="5bd7d04f-25d6-4f6d-8d32-675830519b60"
Dec 12 18:36:59.309437 kubelet[2769]: E1212 18:36:59.309298 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5c98f7cb-flntl" podUID="1290709f-462a-4bdb-93db-9172d8fdb29d"
Dec 12 18:36:59.668033 systemd[1]: Started sshd@32-10.0.0.38:22-10.0.0.1:34982.service - OpenSSH per-connection server daemon (10.0.0.1:34982).
Dec 12 18:36:59.875657 sshd[5541]: Accepted publickey for core from 10.0.0.1 port 34982 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE
Dec 12 18:36:59.879566 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:36:59.918506 systemd-logind[1540]: New session 33 of user core.
Dec 12 18:36:59.932239 systemd[1]: Started session-33.scope - Session 33 of User core.
Dec 12 18:37:00.268804 sshd[5544]: Connection closed by 10.0.0.1 port 34982
Dec 12 18:37:00.270361 sshd-session[5541]: pam_unix(sshd:session): session closed for user core
Dec 12 18:37:00.274118 kubelet[2769]: E1212 18:37:00.273963 2769 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:37:00.279534 containerd[1553]: time="2025-12-12T18:37:00.279487354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:37:00.280828 systemd[1]: sshd@32-10.0.0.38:22-10.0.0.1:34982.service: Deactivated successfully.
Dec 12 18:37:00.288260 systemd[1]: session-33.scope: Deactivated successfully.
Dec 12 18:37:00.294054 systemd-logind[1540]: Session 33 logged out. Waiting for processes to exit.
Dec 12 18:37:00.297362 systemd-logind[1540]: Removed session 33.
Dec 12 18:37:00.609741 containerd[1553]: time="2025-12-12T18:37:00.604529972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:00.617960 containerd[1553]: time="2025-12-12T18:37:00.616112157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:37:00.617960 containerd[1553]: time="2025-12-12T18:37:00.616278401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:37:00.618164 kubelet[2769]: E1212 18:37:00.616574 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:00.618164 kubelet[2769]: E1212 18:37:00.616645 2769 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:00.618164 kubelet[2769]: E1212 18:37:00.616872 2769 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b767d98d4-5tzst_calico-apiserver(1f0321c0-7695-4f53-9a29-c3900a354123): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:00.619096 kubelet[2769]: E1212 18:37:00.618930 2769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b767d98d4-5tzst" podUID="1f0321c0-7695-4f53-9a29-c3900a354123"