Nov 5 15:56:07.957946 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:56:07.957985 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:56:07.957998 kernel: BIOS-provided physical RAM map:
Nov 5 15:56:07.958007 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Nov 5 15:56:07.958016 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Nov 5 15:56:07.958029 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Nov 5 15:56:07.958040 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Nov 5 15:56:07.958049 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Nov 5 15:56:07.958062 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Nov 5 15:56:07.958072 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Nov 5 15:56:07.958082 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Nov 5 15:56:07.958091 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Nov 5 15:56:07.958100 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Nov 5 15:56:07.958112 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Nov 5 15:56:07.958124 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Nov 5 15:56:07.958134 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Nov 5 15:56:07.958147 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 15:56:07.958160 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:56:07.958170 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 15:56:07.958180 kernel: NX (Execute Disable) protection: active
Nov 5 15:56:07.958190 kernel: APIC: Static calls initialized
Nov 5 15:56:07.958200 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable
Nov 5 15:56:07.958210 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable
Nov 5 15:56:07.958220 kernel: extended physical RAM map:
Nov 5 15:56:07.958230 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Nov 5 15:56:07.958240 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Nov 5 15:56:07.958249 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Nov 5 15:56:07.958259 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Nov 5 15:56:07.958272 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable
Nov 5 15:56:07.958281 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable
Nov 5 15:56:07.958291 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable
Nov 5 15:56:07.958301 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable
Nov 5 15:56:07.958311 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable
Nov 5 15:56:07.958321 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Nov 5 15:56:07.958332 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Nov 5 15:56:07.958342 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Nov 5 15:56:07.958352 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Nov 5 15:56:07.958362 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Nov 5 15:56:07.958376 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Nov 5 15:56:07.958387 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Nov 5 15:56:07.958402 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Nov 5 15:56:07.958412 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 15:56:07.958440 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:56:07.958456 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 15:56:07.958467 kernel: efi: EFI v2.7 by EDK II
Nov 5 15:56:07.958477 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Nov 5 15:56:07.958487 kernel: random: crng init done
Nov 5 15:56:07.958498 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Nov 5 15:56:07.958509 kernel: secureboot: Secure boot enabled
Nov 5 15:56:07.958519 kernel: SMBIOS 2.8 present.
Nov 5 15:56:07.958529 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 5 15:56:07.958540 kernel: DMI: Memory slots populated: 1/1
Nov 5 15:56:07.958554 kernel: Hypervisor detected: KVM
Nov 5 15:56:07.958565 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Nov 5 15:56:07.958576 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 15:56:07.958587 kernel: kvm-clock: using sched offset of 6286718528 cycles
Nov 5 15:56:07.958598 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 15:56:07.958610 kernel: tsc: Detected 2794.748 MHz processor
Nov 5 15:56:07.958621 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:56:07.958632 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:56:07.958643 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Nov 5 15:56:07.958663 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 5 15:56:07.958688 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:56:07.958702 kernel: Using GB pages for direct mapping
Nov 5 15:56:07.958713 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:56:07.958724 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Nov 5 15:56:07.958736 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 5 15:56:07.958747 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:56:07.958763 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:56:07.958774 kernel: ACPI: FACS 0x000000009BBDD000 000040
Nov 5 15:56:07.958786 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:56:07.958797 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:56:07.958808 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:56:07.958820 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:56:07.958832 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 5 15:56:07.958847 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Nov 5 15:56:07.958858 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Nov 5 15:56:07.958870 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Nov 5 15:56:07.958881 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Nov 5 15:56:07.958892 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Nov 5 15:56:07.958903 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Nov 5 15:56:07.958915 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Nov 5 15:56:07.958930 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Nov 5 15:56:07.958942 kernel: No NUMA configuration found
Nov 5 15:56:07.958953 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Nov 5 15:56:07.958965 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Nov 5 15:56:07.958976 kernel: Zone ranges:
Nov 5 15:56:07.958987 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:56:07.958999 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Nov 5 15:56:07.959010 kernel: Normal empty
Nov 5 15:56:07.959026 kernel: Device empty
Nov 5 15:56:07.959037 kernel: Movable zone start for each node
Nov 5 15:56:07.959048 kernel: Early memory node ranges
Nov 5 15:56:07.959060 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Nov 5 15:56:07.959071 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Nov 5 15:56:07.959083 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Nov 5 15:56:07.959094 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Nov 5 15:56:07.959105 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Nov 5 15:56:07.959121 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Nov 5 15:56:07.959132 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:56:07.959144 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Nov 5 15:56:07.959155 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 15:56:07.959167 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 5 15:56:07.959178 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 5 15:56:07.959190 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Nov 5 15:56:07.959206 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 15:56:07.959217 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 15:56:07.959229 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:56:07.959240 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 15:56:07.959256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 15:56:07.959268 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 15:56:07.959280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 15:56:07.959295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 15:56:07.959306 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:56:07.959318 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 15:56:07.959329 kernel: TSC deadline timer available
Nov 5 15:56:07.959341 kernel: CPU topo: Max. logical packages: 1
Nov 5 15:56:07.959352 kernel: CPU topo: Max. logical dies: 1
Nov 5 15:56:07.959377 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:56:07.959388 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:56:07.959400 kernel: CPU topo: Num. cores per package: 4
Nov 5 15:56:07.959412 kernel: CPU topo: Num. threads per package: 4
Nov 5 15:56:07.959448 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 5 15:56:07.959461 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 15:56:07.959473 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 15:56:07.959485 kernel: kvm-guest: setup PV sched yield
Nov 5 15:56:07.959502 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 5 15:56:07.959513 kernel: Booting paravirtualized kernel on KVM
Nov 5 15:56:07.959525 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:56:07.959538 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 5 15:56:07.959550 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 5 15:56:07.959561 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 5 15:56:07.959573 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 5 15:56:07.959589 kernel: kvm-guest: PV spinlocks enabled
Nov 5 15:56:07.959601 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 15:56:07.959615 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:56:07.959627 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 15:56:07.959639 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:56:07.959651 kernel: Fallback order for Node 0: 0
Nov 5 15:56:07.959667 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Nov 5 15:56:07.959691 kernel: Policy zone: DMA32
Nov 5 15:56:07.959703 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:56:07.959715 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 15:56:07.959727 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:56:07.959739 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:56:07.959752 kernel: Dynamic Preempt: voluntary
Nov 5 15:56:07.959763 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:56:07.959781 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:56:07.959793 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 15:56:07.959806 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:56:07.959818 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:56:07.959830 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:56:07.959842 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:56:07.959854 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 15:56:07.959871 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 15:56:07.959883 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 15:56:07.959900 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 15:56:07.959912 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 5 15:56:07.959924 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 15:56:07.959936 kernel: Console: colour dummy device 80x25
Nov 5 15:56:07.959948 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:56:07.959965 kernel: ACPI: Core revision 20240827
Nov 5 15:56:07.959977 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 15:56:07.959988 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:56:07.960000 kernel: x2apic enabled
Nov 5 15:56:07.960012 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:56:07.960024 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 15:56:07.960036 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 15:56:07.960052 kernel: kvm-guest: setup PV IPIs
Nov 5 15:56:07.960065 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:56:07.960077 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 15:56:07.960089 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 5 15:56:07.960101 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 15:56:07.960113 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 15:56:07.960125 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 15:56:07.960140 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:56:07.960156 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 15:56:07.960168 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 15:56:07.960180 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 5 15:56:07.960191 kernel: active return thunk: retbleed_return_thunk
Nov 5 15:56:07.960202 kernel: RETBleed: Mitigation: untrained return thunk
Nov 5 15:56:07.960214 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:56:07.960229 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:56:07.960241 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 15:56:07.960253 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 15:56:07.960265 kernel: active return thunk: srso_return_thunk
Nov 5 15:56:07.960276 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 15:56:07.960288 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:56:07.960299 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:56:07.960315 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:56:07.960327 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:56:07.960338 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 5 15:56:07.960350 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:56:07.960361 kernel: pid_max: default: 32768 minimum: 301
Nov 5 15:56:07.960373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:56:07.960384 kernel: landlock: Up and running.
Nov 5 15:56:07.960399 kernel: SELinux: Initializing.
Nov 5 15:56:07.960411 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:56:07.960449 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:56:07.960463 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 5 15:56:07.960475 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 15:56:07.960486 kernel: ... version:                0
Nov 5 15:56:07.960502 kernel: ... bit width:              48
Nov 5 15:56:07.960519 kernel: ... generic registers:      6
Nov 5 15:56:07.960531 kernel: ... value mask:             0000ffffffffffff
Nov 5 15:56:07.960543 kernel: ... max period:             00007fffffffffff
Nov 5 15:56:07.960555 kernel: ... fixed-purpose events:   0
Nov 5 15:56:07.960567 kernel: ... event mask:             000000000000003f
Nov 5 15:56:07.960579 kernel: signal: max sigframe size: 1776
Nov 5 15:56:07.960591 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:56:07.960607 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:56:07.960619 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 15:56:07.960631 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:56:07.960643 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:56:07.960654 kernel: .... node #0, CPUs: #1 #2 #3
Nov 5 15:56:07.960666 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 15:56:07.960689 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 5 15:56:07.960706 kernel: Memory: 2431744K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114536K reserved, 0K cma-reserved)
Nov 5 15:56:07.960719 kernel: devtmpfs: initialized
Nov 5 15:56:07.960731 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:56:07.960743 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Nov 5 15:56:07.960755 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Nov 5 15:56:07.960767 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:56:07.960779 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 15:56:07.960795 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:56:07.960808 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:56:07.960819 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:56:07.960832 kernel: audit: type=2000 audit(1762358165.149:1): state=initialized audit_enabled=0 res=1
Nov 5 15:56:07.960844 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:56:07.960856 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:56:07.960867 kernel: cpuidle: using governor menu
Nov 5 15:56:07.960882 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:56:07.960894 kernel: dca service started, version 1.12.1
Nov 5 15:56:07.960906 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 5 15:56:07.960918 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:56:07.960929 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:56:07.960941 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 5 15:56:07.960954 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 5 15:56:07.960969 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 15:56:07.960981 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 15:56:07.960993 kernel: ACPI: Added _OSI(Module Device) Nov 5 15:56:07.961005 kernel: ACPI: Added _OSI(Processor Device) Nov 5 15:56:07.961017 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 15:56:07.961028 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 15:56:07.961040 kernel: ACPI: Interpreter enabled Nov 5 15:56:07.961051 kernel: ACPI: PM: (supports S0 S5) Nov 5 15:56:07.961067 kernel: ACPI: Using IOAPIC for interrupt routing Nov 5 15:56:07.961079 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 5 15:56:07.961090 kernel: PCI: Using E820 reservations for host bridge windows Nov 5 15:56:07.961103 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 5 15:56:07.961114 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 5 15:56:07.961481 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 5 15:56:07.961737 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 5 15:56:07.961965 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 5 15:56:07.961982 kernel: PCI host bridge to bus 0000:00 Nov 5 15:56:07.962212 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 5 15:56:07.962438 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 5 15:56:07.962653 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 5 15:56:07.962885 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Nov 5 15:56:07.963099 kernel: pci_bus 0000:00: root bus 
resource [mem 0xf0000000-0xfebfffff window] Nov 5 15:56:07.963306 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Nov 5 15:56:07.963508 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 5 15:56:07.963748 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 5 15:56:07.963945 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 5 15:56:07.964121 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Nov 5 15:56:07.964317 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Nov 5 15:56:07.964545 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 5 15:56:07.964738 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 5 15:56:07.964931 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 5 15:56:07.965155 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Nov 5 15:56:07.965390 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Nov 5 15:56:07.965643 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Nov 5 15:56:07.965898 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 5 15:56:07.966088 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Nov 5 15:56:07.966275 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Nov 5 15:56:07.966485 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Nov 5 15:56:07.966731 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 5 15:56:07.966950 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Nov 5 15:56:07.967164 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Nov 5 15:56:07.967394 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Nov 5 15:56:07.967666 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Nov 5 
15:56:07.967924 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 5 15:56:07.968134 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 5 15:56:07.968356 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 5 15:56:07.968617 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Nov 5 15:56:07.968896 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Nov 5 15:56:07.969136 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 5 15:56:07.969370 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Nov 5 15:56:07.969389 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 5 15:56:07.969403 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 5 15:56:07.969415 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 5 15:56:07.969445 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 5 15:56:07.969464 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 5 15:56:07.969476 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 5 15:56:07.969489 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 5 15:56:07.969500 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 5 15:56:07.969513 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 5 15:56:07.969525 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 5 15:56:07.969540 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 5 15:56:07.969560 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 5 15:56:07.969575 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 5 15:56:07.969590 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 5 15:56:07.969605 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 5 15:56:07.969619 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 5 
15:56:07.969635 kernel: iommu: Default domain type: Translated Nov 5 15:56:07.969649 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 5 15:56:07.969667 kernel: efivars: Registered efivars operations Nov 5 15:56:07.969690 kernel: PCI: Using ACPI for IRQ routing Nov 5 15:56:07.969702 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 5 15:56:07.969715 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Nov 5 15:56:07.969727 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff] Nov 5 15:56:07.969738 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff] Nov 5 15:56:07.969750 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Nov 5 15:56:07.969766 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Nov 5 15:56:07.970005 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 5 15:56:07.970234 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 5 15:56:07.970493 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 5 15:56:07.970511 kernel: vgaarb: loaded Nov 5 15:56:07.970524 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 5 15:56:07.970537 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 5 15:56:07.970555 kernel: clocksource: Switched to clocksource kvm-clock Nov 5 15:56:07.970568 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 15:56:07.970583 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 15:56:07.970598 kernel: pnp: PnP ACPI init Nov 5 15:56:07.970863 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Nov 5 15:56:07.970883 kernel: pnp: PnP ACPI: found 6 devices Nov 5 15:56:07.970897 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 5 15:56:07.970914 kernel: NET: Registered PF_INET protocol family Nov 5 15:56:07.970927 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 5 15:56:07.970939 kernel: 
tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 5 15:56:07.970952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 15:56:07.970964 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 15:56:07.970977 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 5 15:56:07.970989 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 5 15:56:07.971005 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 15:56:07.971017 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 15:56:07.971030 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 15:56:07.971043 kernel: NET: Registered PF_XDP protocol family Nov 5 15:56:07.971281 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Nov 5 15:56:07.971651 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Nov 5 15:56:07.971896 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 5 15:56:07.972100 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 5 15:56:07.972295 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 5 15:56:07.972513 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Nov 5 15:56:07.972727 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Nov 5 15:56:07.972938 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Nov 5 15:56:07.972956 kernel: PCI: CLS 0 bytes, default 64 Nov 5 15:56:07.972975 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 5 15:56:07.972993 kernel: Initialise system trusted keyrings Nov 5 15:56:07.973006 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 5 15:56:07.973017 kernel: Key type asymmetric registered Nov 5 
15:56:07.973030 kernel: Asymmetric key parser 'x509' registered Nov 5 15:56:07.973060 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 5 15:56:07.973072 kernel: io scheduler mq-deadline registered Nov 5 15:56:07.973084 kernel: io scheduler kyber registered Nov 5 15:56:07.973093 kernel: io scheduler bfq registered Nov 5 15:56:07.973102 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 5 15:56:07.973112 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 5 15:56:07.973121 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 5 15:56:07.973130 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 5 15:56:07.973139 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 15:56:07.973151 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 5 15:56:07.973160 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 5 15:56:07.973169 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 5 15:56:07.973178 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 5 15:56:07.973188 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 5 15:56:07.973381 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 5 15:56:07.973623 kernel: rtc_cmos 00:04: registered as rtc0 Nov 5 15:56:07.973856 kernel: rtc_cmos 00:04: setting system clock to 2025-11-05T15:56:05 UTC (1762358165) Nov 5 15:56:07.974063 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 5 15:56:07.974081 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 5 15:56:07.974099 kernel: efifb: probing for efifb Nov 5 15:56:07.974113 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 5 15:56:07.974126 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 5 15:56:07.974142 kernel: efifb: scrolling: redraw Nov 5 15:56:07.974155 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 5 15:56:07.974168 kernel: Console: 
switching to colour frame buffer device 160x50 Nov 5 15:56:07.974184 kernel: fb0: EFI VGA frame buffer device Nov 5 15:56:07.974197 kernel: pstore: Using crash dump compression: deflate Nov 5 15:56:07.974213 kernel: pstore: Registered efi_pstore as persistent store backend Nov 5 15:56:07.974226 kernel: NET: Registered PF_INET6 protocol family Nov 5 15:56:07.974238 kernel: Segment Routing with IPv6 Nov 5 15:56:07.974251 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 15:56:07.974263 kernel: NET: Registered PF_PACKET protocol family Nov 5 15:56:07.974275 kernel: Key type dns_resolver registered Nov 5 15:56:07.974288 kernel: IPI shorthand broadcast: enabled Nov 5 15:56:07.974303 kernel: sched_clock: Marking stable (1554004641, 274302989)->(1888714364, -60406734) Nov 5 15:56:07.974315 kernel: registered taskstats version 1 Nov 5 15:56:07.974327 kernel: Loading compiled-in X.509 certificates Nov 5 15:56:07.974338 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382' Nov 5 15:56:07.974351 kernel: Demotion targets for Node 0: null Nov 5 15:56:07.974363 kernel: Key type .fscrypt registered Nov 5 15:56:07.974376 kernel: Key type fscrypt-provisioning registered Nov 5 15:56:07.974391 kernel: ima: No TPM chip found, activating TPM-bypass! 
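(Annotation, not part of the boot log.) The `rtc_cmos` entry above pairs a human-readable UTC timestamp with its Unix epoch value, "2025-11-05T15:56:05 UTC (1762358165)". The correspondence can be checked directly; a quick sketch:

```python
from datetime import datetime, timezone

# The log's rtc_cmos entry reports "2025-11-05T15:56:05 UTC (1762358165)".
# Converting the epoch value back should reproduce the printed timestamp.
ts = datetime.fromtimestamp(1762358165, tz=timezone.utc)
print(ts.strftime("%Y-%m-%dT%H:%M:%S UTC"))  # 2025-11-05T15:56:05 UTC
```

Note the kernel clock here lags the log timestamps (15:56:07.97x) by about three seconds; the RTC read reflects the hardware clock before NTP or further adjustment.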
Nov 5 15:56:07.974404 kernel: ima: Allocated hash algorithm: sha1 Nov 5 15:56:07.974416 kernel: ima: No architecture policies found Nov 5 15:56:07.974444 kernel: clk: Disabling unused clocks Nov 5 15:56:07.974457 kernel: Freeing unused kernel image (initmem) memory: 15964K Nov 5 15:56:07.974470 kernel: Write protecting the kernel read-only data: 40960k Nov 5 15:56:07.974483 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 5 15:56:07.974499 kernel: Run /init as init process Nov 5 15:56:07.974511 kernel: with arguments: Nov 5 15:56:07.974524 kernel: /init Nov 5 15:56:07.974537 kernel: with environment: Nov 5 15:56:07.974549 kernel: HOME=/ Nov 5 15:56:07.974562 kernel: TERM=linux Nov 5 15:56:07.974574 kernel: SCSI subsystem initialized Nov 5 15:56:07.974589 kernel: libata version 3.00 loaded. Nov 5 15:56:07.974833 kernel: ahci 0000:00:1f.2: version 3.0 Nov 5 15:56:07.974851 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 5 15:56:07.975085 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 5 15:56:07.975301 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 5 15:56:07.975548 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 5 15:56:07.975832 kernel: scsi host0: ahci Nov 5 15:56:07.976077 kernel: scsi host1: ahci Nov 5 15:56:07.976313 kernel: scsi host2: ahci Nov 5 15:56:07.976567 kernel: scsi host3: ahci Nov 5 15:56:07.976802 kernel: scsi host4: ahci Nov 5 15:56:07.977104 kernel: scsi host5: ahci Nov 5 15:56:07.977139 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Nov 5 15:56:07.977153 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Nov 5 15:56:07.977171 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Nov 5 15:56:07.977191 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Nov 5 15:56:07.977211 kernel: ata5: SATA max UDMA/133 abar 
m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Nov 5 15:56:07.977232 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Nov 5 15:56:07.977258 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 5 15:56:07.977279 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 5 15:56:07.977300 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 5 15:56:07.977320 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 5 15:56:07.977341 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 5 15:56:07.977361 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 5 15:56:07.977381 kernel: ata3.00: LPM support broken, forcing max_power Nov 5 15:56:07.977407 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 5 15:56:07.977446 kernel: ata3.00: applying bridge limits Nov 5 15:56:07.977469 kernel: ata3.00: LPM support broken, forcing max_power Nov 5 15:56:07.977489 kernel: ata3.00: configured for UDMA/100 Nov 5 15:56:07.977951 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 5 15:56:07.978321 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 5 15:56:07.978690 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 5 15:56:07.978721 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 15:56:07.978731 kernel: GPT:16515071 != 27000831 Nov 5 15:56:07.978740 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 15:56:07.978749 kernel: GPT:16515071 != 27000831 Nov 5 15:56:07.978757 kernel: GPT: Use GNU Parted to correct GPT errors. 
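(Annotation, not part of the boot log.) The GPT warnings above ("GPT:16515071 != 27000831") arise because the image carries a backup GPT header positioned for a smaller disk: the backup header belongs in the disk's last sector, but this image was built for a 16515072-sector disk and then written to a 27000832-sector one. A sketch of the arithmetic:

```python
# Values reported by the log: virtio disk with 27000832 512-byte sectors,
# but the backup GPT header sits at LBA 16515071 (the image was sized for
# a 16515072-sector disk before being written to this larger one).
total_sectors = 27000832
backup_header_lba = 16515071

expected_lba = total_sectors - 1  # backup GPT header belongs in the last sector
print(expected_lba)                       # 27000831
print(backup_header_lba == expected_lba)  # False -> kernel prints the mismatch
```

Tools such as GNU Parted (which the kernel message suggests) or `sgdisk -e` relocate the backup header to the end of the disk; here Flatcar's `disk-uuid.service` handles it later in the boot.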
Nov 5 15:56:07.978766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 5 15:56:07.979076 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 5 15:56:07.979101 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 5 15:56:07.979484 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 5 15:56:07.979509 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 15:56:07.979530 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:56:07.979551 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:56:07.979571 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 15:56:07.979598 kernel: raid6: avx2x4 gen() 23680 MB/s Nov 5 15:56:07.979620 kernel: raid6: avx2x2 gen() 18528 MB/s Nov 5 15:56:07.979640 kernel: raid6: avx2x1 gen() 16929 MB/s Nov 5 15:56:07.979661 kernel: raid6: using algorithm avx2x4 gen() 23680 MB/s Nov 5 15:56:07.979697 kernel: raid6: .... 
xor() 5122 MB/s, rmw enabled Nov 5 15:56:07.979722 kernel: raid6: using avx2x2 recovery algorithm Nov 5 15:56:07.979741 kernel: xor: automatically using best checksumming function avx Nov 5 15:56:07.979764 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:56:07.979783 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (181) Nov 5 15:56:07.979809 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 15:56:07.979838 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:56:07.979863 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:56:07.979889 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:56:07.979914 kernel: loop: module loaded Nov 5 15:56:07.979948 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 15:56:07.979974 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:56:07.980005 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:56:07.980041 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:56:07.980068 systemd[1]: Detected virtualization kvm. Nov 5 15:56:07.980096 systemd[1]: Detected architecture x86-64. Nov 5 15:56:07.980129 systemd[1]: Running in initrd. Nov 5 15:56:07.980158 systemd[1]: No hostname configured, using default hostname. Nov 5 15:56:07.980184 systemd[1]: Hostname set to . Nov 5 15:56:07.980213 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:56:07.980238 systemd[1]: Queued start job for default target initrd.target. 
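(Annotation, not part of the boot log.) The networking hash-table lines near the top of this log report sizes as "(order: N, ... bytes)", where the byte count is `PAGE_SIZE << order` with 4 KiB pages on x86-64. A quick check against the logged values:

```python
PAGE_SIZE = 4096  # x86-64 base page size

# (table, order, bytes) triples copied from the log
tables = [
    ("tcp_listen_portaddr_hash", 3, 32768),
    ("TCP established", 6, 262144),
    ("TCP bind", 8, 1048576),
    ("UDP", 4, 65536),
]
for name, order, size in tables:
    assert PAGE_SIZE << order == size, name
print("all hash-table sizes match PAGE_SIZE << order")
```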
Nov 5 15:56:07.980264 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:56:07.980290 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:56:07.980323 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:56:07.980353 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:56:07.980379 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:56:07.980406 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:56:07.980457 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:56:07.980491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:56:07.980517 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:56:07.980531 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:56:07.980544 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:56:07.980557 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:56:07.980571 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:56:07.980583 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:56:07.980600 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:56:07.980615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:56:07.980629 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:56:07.980642 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 15:56:07.980655 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 5 15:56:07.980668 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:56:07.980695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:56:07.980712 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:56:07.980725 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:56:07.980739 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:56:07.980752 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:56:07.980764 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:56:07.980778 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:56:07.980794 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:56:07.980807 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:56:07.980819 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:56:07.980836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:56:07.980850 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:56:07.980866 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:56:07.980879 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:56:07.980892 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:56:07.980971 systemd-journald[314]: Collecting audit messages is disabled. Nov 5 15:56:07.981006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 5 15:56:07.981019 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 15:56:07.981032 systemd-journald[314]: Journal started Nov 5 15:56:07.981061 systemd-journald[314]: Runtime Journal (/run/log/journal/8e3d0e543d8343fe890c313c4e899156) is 5.9M, max 47.9M, 41.9M free. Nov 5 15:56:07.987443 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:56:07.990170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:56:07.995739 kernel: Bridge firewalling registered Nov 5 15:56:07.993498 systemd-modules-load[317]: Inserted module 'br_netfilter' Nov 5 15:56:07.997855 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:56:07.999864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:56:08.001126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:56:08.001559 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:56:08.013256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:56:08.027599 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:56:08.032008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:56:08.038178 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:56:08.040879 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:56:08.044352 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:56:08.049532 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:56:08.064422 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 5 15:56:08.088096 dracut-cmdline[363]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:56:08.131004 systemd-resolved[358]: Positive Trust Anchors: Nov 5 15:56:08.131034 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:56:08.131039 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:56:08.131081 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:56:08.204596 systemd-resolved[358]: Defaulting to hostname 'linux'. Nov 5 15:56:08.209692 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:56:08.220954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:56:08.388093 kernel: Loading iSCSI transport class v2.0-870. 
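(Annotation, not part of the boot log.) The dracut-cmdline entry shows `rootflags=rw` and `mount.usrflags=ro` appearing twice because dracut prepends its own defaults (`rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ...`) ahead of the original `BOOT_IMAGE=` line; for repeated keys, most consumers take the later occurrence. A small illustrative parser for such a line (not Flatcar's actual code), using an abbreviated version of the logged cmdline:

```python
# Abbreviated form of the kernel command line from the log.
cmdline = (
    "rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro "
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
    "root=LABEL=ROOT console=ttyS0,115200"
)

# Later duplicates overwrite earlier ones, mirroring last-one-wins handling.
params = {}
for token in cmdline.split():
    key, _, value = token.partition("=")
    params[key] = value  # value is "" for bare flags

print(params["root"])       # LABEL=ROOT
print(params["rootflags"])  # rw
```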
Nov 5 15:56:08.416608 kernel: iscsi: registered transport (tcp) Nov 5 15:56:08.474874 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:56:08.474965 kernel: QLogic iSCSI HBA Driver Nov 5 15:56:08.542645 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:56:08.608392 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:56:08.613492 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:56:08.748716 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:56:08.756749 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:56:08.792557 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:56:08.860829 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:56:08.864842 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:56:08.905088 systemd-udevd[594]: Using default interface naming scheme 'v257'. Nov 5 15:56:08.924033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:56:08.930592 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:56:08.972485 dracut-pre-trigger[662]: rd.md=0: removing MD RAID activation Nov 5 15:56:08.988879 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:56:08.992733 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:56:09.016104 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:56:09.019169 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 5 15:56:09.052514 systemd-networkd[725]: lo: Link UP Nov 5 15:56:09.052525 systemd-networkd[725]: lo: Gained carrier Nov 5 15:56:09.053391 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:56:09.054919 systemd[1]: Reached target network.target - Network. Nov 5 15:56:09.122993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:56:09.130627 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:56:09.190451 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 5 15:56:09.203891 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 5 15:56:09.220467 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 15:56:09.235785 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 5 15:56:09.250034 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:56:09.256523 kernel: AES CTR mode by8 optimization enabled Nov 5 15:56:09.251894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 15:56:09.252509 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:56:09.253443 systemd-networkd[725]: eth0: Link UP Nov 5 15:56:09.253685 systemd-networkd[725]: eth0: Gained carrier Nov 5 15:56:09.253696 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:56:09.270384 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 15:56:09.272451 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:56:09.276035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
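(Annotation, not part of the boot log.) A quick sanity check of the DHCPv4 lease reported for eth0 above (10.0.0.107/16, gateway 10.0.0.1): for the default route to be reachable on-link, the gateway must fall inside the interface's /16 network. Python's `ipaddress` module makes this easy to verify:

```python
import ipaddress

# Lease values from the systemd-networkd entry in the log
iface = ipaddress.ip_interface("10.0.0.107/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True -> gateway is on-link
```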
Nov 5 15:56:09.276207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:56:09.279152 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:56:09.288395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:56:09.295132 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 15:56:09.350683 disk-uuid[843]: Primary Header is updated. Nov 5 15:56:09.350683 disk-uuid[843]: Secondary Entries is updated. Nov 5 15:56:09.350683 disk-uuid[843]: Secondary Header is updated. Nov 5 15:56:09.350943 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:56:09.352602 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:56:09.353417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:56:09.354040 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:56:09.358155 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:56:09.406058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:56:09.425168 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:56:10.400748 disk-uuid[854]: Warning: The kernel is still using the old partition table. Nov 5 15:56:10.400748 disk-uuid[854]: The new table will be used at the next reboot or after you Nov 5 15:56:10.400748 disk-uuid[854]: run partprobe(8) or kpartx(8) Nov 5 15:56:10.400748 disk-uuid[854]: The operation has completed successfully. Nov 5 15:56:10.414694 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:56:10.414899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:56:10.421897 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 5 15:56:10.486941 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (875) Nov 5 15:56:10.495063 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:56:10.495129 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:56:10.516121 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:56:10.516221 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:56:10.542936 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:56:10.565075 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:56:10.577160 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 15:56:10.844561 ignition[894]: Ignition 2.22.0 Nov 5 15:56:10.844574 ignition[894]: Stage: fetch-offline Nov 5 15:56:10.844624 ignition[894]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:56:10.844636 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:56:10.844724 ignition[894]: parsed url from cmdline: "" Nov 5 15:56:10.844728 ignition[894]: no config URL provided Nov 5 15:56:10.844733 ignition[894]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:56:10.844744 ignition[894]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:56:10.844789 ignition[894]: op(1): [started] loading QEMU firmware config module Nov 5 15:56:10.844794 ignition[894]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 5 15:56:10.861002 ignition[894]: op(1): [finished] loading QEMU firmware config module Nov 5 15:56:10.894678 systemd-networkd[725]: eth0: Gained IPv6LL Nov 5 15:56:10.947360 ignition[894]: parsing config with SHA512: 389b0d73f5504efeac759215ccc7ae2cdd1e591db08c5f66685ab681d37be865662d61ffe3684439e20749a4ee0a2ad90c0d6a1267e37f85cda9a19aeb0fbad9 Nov 5 15:56:10.953007 unknown[894]: fetched base config from "system" Nov 5 
15:56:10.953023 unknown[894]: fetched user config from "qemu" Nov 5 15:56:10.953484 ignition[894]: fetch-offline: fetch-offline passed Nov 5 15:56:10.953545 ignition[894]: Ignition finished successfully Nov 5 15:56:10.959056 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:56:10.963692 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 5 15:56:10.968001 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 15:56:11.006561 ignition[904]: Ignition 2.22.0 Nov 5 15:56:11.006576 ignition[904]: Stage: kargs Nov 5 15:56:11.006869 ignition[904]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:56:11.006882 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:56:11.007841 ignition[904]: kargs: kargs passed Nov 5 15:56:11.007900 ignition[904]: Ignition finished successfully Nov 5 15:56:11.019345 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:56:11.021331 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 15:56:11.063863 ignition[913]: Ignition 2.22.0 Nov 5 15:56:11.063877 ignition[913]: Stage: disks Nov 5 15:56:11.064051 ignition[913]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:56:11.064064 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:56:11.065084 ignition[913]: disks: disks passed Nov 5 15:56:11.065139 ignition[913]: Ignition finished successfully Nov 5 15:56:11.073108 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:56:11.077521 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:56:11.078324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:56:11.086021 systemd[1]: Reached target local-fs.target - Local File Systems. 
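(Annotation, not part of the boot log.) In the fetch-offline stage above, Ignition logs the SHA-512 digest of the config it parsed ("parsing config with SHA512: 389b..."), which lets a fetched config be matched against the log after the fact. A generic sketch of the same digest computation; the config body below is an invented example, not the actual QEMU-provided config:

```python
import hashlib

# Hypothetical config body; the real one came from QEMU's fw_cfg device.
config = b'{"ignition": {"version": "3.4.0"}}'

digest = hashlib.sha512(config).hexdigest()
print(len(digest))  # 128 hex characters, the same width as the digest in the log
```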
Nov 5 15:56:11.087093 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:56:11.090217 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:56:11.095356 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:56:11.146357 systemd-fsck[923]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 5 15:56:11.154913 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:56:11.160365 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 15:56:11.291447 kernel: EXT4-fs (vda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 15:56:11.292146 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:56:11.293571 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:56:11.297354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:56:11.302108 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:56:11.306315 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 15:56:11.306382 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:56:11.306416 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:56:11.324728 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:56:11.327618 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 5 15:56:11.335450 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (931) Nov 5 15:56:11.335517 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:56:11.338825 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:56:11.343099 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:56:11.343139 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:56:11.344805 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:56:11.405684 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:56:11.412918 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:56:11.418906 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:56:11.424697 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:56:11.542642 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:56:11.546564 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:56:11.550137 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:56:11.568328 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:56:11.571723 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:56:11.588676 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 5 15:56:11.608300 ignition[1045]: INFO : Ignition 2.22.0 Nov 5 15:56:11.608300 ignition[1045]: INFO : Stage: mount Nov 5 15:56:11.611231 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:56:11.611231 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:56:11.611231 ignition[1045]: INFO : mount: mount passed Nov 5 15:56:11.611231 ignition[1045]: INFO : Ignition finished successfully Nov 5 15:56:11.612242 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:56:11.615500 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:56:11.640696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:56:11.661452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1057) Nov 5 15:56:11.665319 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:56:11.665347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:56:11.670152 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:56:11.670202 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:56:11.672630 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:56:11.714228 ignition[1074]: INFO : Ignition 2.22.0
Nov 5 15:56:11.714228 ignition[1074]: INFO : Stage: files
Nov 5 15:56:11.717686 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:56:11.717686 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 15:56:11.717686 ignition[1074]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 15:56:11.724724 ignition[1074]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 15:56:11.724724 ignition[1074]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 15:56:11.734599 ignition[1074]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 15:56:11.737618 ignition[1074]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 15:56:11.740523 ignition[1074]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 15:56:11.738303 unknown[1074]: wrote ssh authorized keys file for user: core
Nov 5 15:56:11.745753 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:56:11.745753 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 5 15:56:11.813592 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 15:56:11.885630 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:56:11.885630 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 15:56:11.894300 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 15:56:11.894300 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:56:11.894300 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:56:11.894300 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:56:11.894300 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:56:11.894300 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:56:11.894300 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:56:11.920582 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:56:11.920582 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:56:11.920582 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 15:56:11.920582 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 15:56:11.920582 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 15:56:11.920582 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 5 15:56:12.288566 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 15:56:12.734567 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 15:56:12.734567 ignition[1074]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 15:56:12.740433 ignition[1074]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:56:12.748856 ignition[1074]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:56:12.748856 ignition[1074]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 15:56:12.748856 ignition[1074]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 15:56:12.756188 ignition[1074]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 15:56:12.756188 ignition[1074]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 15:56:12.756188 ignition[1074]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 15:56:12.756188 ignition[1074]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 15:56:12.789564 ignition[1074]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 15:56:12.797216 ignition[1074]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 15:56:12.799940 ignition[1074]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 15:56:12.799940 ignition[1074]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 15:56:12.799940 ignition[1074]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 15:56:12.799940 ignition[1074]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:56:12.799940 ignition[1074]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:56:12.799940 ignition[1074]: INFO : files: files passed
Nov 5 15:56:12.799940 ignition[1074]: INFO : Ignition finished successfully
Nov 5 15:56:12.807225 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 15:56:12.810741 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 15:56:12.817660 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 15:56:12.839685 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 15:56:12.839835 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 15:56:12.847982 initrd-setup-root-after-ignition[1105]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 5 15:56:12.854339 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:56:12.854339 initrd-setup-root-after-ignition[1107]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:56:12.860138 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:56:12.859058 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:56:12.861141 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 15:56:12.869910 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 15:56:12.941314 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 15:56:12.941531 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 15:56:12.942711 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 15:56:12.947894 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 15:56:12.951856 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 15:56:12.953001 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 15:56:12.995242 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:56:12.998254 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 15:56:13.026483 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:56:13.026811 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:56:13.027894 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:56:13.033007 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 15:56:13.036331 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 15:56:13.036909 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:56:13.042300 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 15:56:13.043148 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 15:56:13.048037 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 15:56:13.050891 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:56:13.054287 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 15:56:13.058041 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:56:13.062032 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 15:56:13.065285 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:56:13.068582 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 15:56:13.072548 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 15:56:13.078229 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 15:56:13.079072 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 15:56:13.079333 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:56:13.084313 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:56:13.085294 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:56:13.089873 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 15:56:13.092905 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:56:13.096579 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 15:56:13.096729 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:56:13.101914 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 15:56:13.102061 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:56:13.103305 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 15:56:13.107917 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 15:56:13.113515 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:56:13.114347 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 15:56:13.118979 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 15:56:13.121547 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 15:56:13.121731 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:56:13.124503 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 15:56:13.124690 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:56:13.127405 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 15:56:13.127661 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:56:13.130548 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 15:56:13.130754 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 15:56:13.138324 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 15:56:13.140258 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 15:56:13.143846 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 15:56:13.144153 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:56:13.147951 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 15:56:13.148065 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:56:13.152664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 15:56:13.152825 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:56:13.168845 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 15:56:13.169023 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 15:56:13.209487 ignition[1131]: INFO : Ignition 2.22.0
Nov 5 15:56:13.209487 ignition[1131]: INFO : Stage: umount
Nov 5 15:56:13.209487 ignition[1131]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:56:13.209487 ignition[1131]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 15:56:13.219890 ignition[1131]: INFO : umount: umount passed
Nov 5 15:56:13.219890 ignition[1131]: INFO : Ignition finished successfully
Nov 5 15:56:13.224069 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 15:56:13.227438 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 15:56:13.233088 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 15:56:13.238685 systemd[1]: Stopped target network.target - Network.
Nov 5 15:56:13.248518 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 15:56:13.248682 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 15:56:13.254824 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 15:56:13.254946 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 15:56:13.265380 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 15:56:13.265549 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 15:56:13.267610 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 15:56:13.267690 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 15:56:13.272820 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 15:56:13.279267 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 15:56:13.304851 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 15:56:13.305054 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 15:56:13.318769 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 15:56:13.318947 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 15:56:13.336127 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 15:56:13.343714 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 15:56:13.349263 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 15:56:13.350236 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 15:56:13.350302 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:56:13.366223 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 15:56:13.367701 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 15:56:13.383549 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 15:56:13.385388 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 15:56:13.385494 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:56:13.396020 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 15:56:13.396140 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:56:13.426711 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 15:56:13.426817 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:56:13.434715 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:56:13.482503 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 15:56:13.482764 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:56:13.484675 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 15:56:13.484743 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:56:13.493616 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 15:56:13.493761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:56:13.508123 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 15:56:13.508263 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:56:13.529341 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 15:56:13.529471 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:56:13.534324 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 15:56:13.534403 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:56:13.541549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 15:56:13.542960 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 15:56:13.543053 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:56:13.544021 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 15:56:13.544090 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:56:13.544973 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 5 15:56:13.545039 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:56:13.554460 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 15:56:13.554576 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:56:13.568047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:56:13.568145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:56:13.588969 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 15:56:13.598754 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 15:56:13.608391 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 15:56:13.608639 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 15:56:13.611012 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 15:56:13.615456 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 15:56:13.651104 systemd[1]: Switching root.
Nov 5 15:56:13.690084 systemd-journald[314]: Journal stopped
Nov 5 15:56:17.466127 systemd-journald[314]: Received SIGTERM from PID 1 (systemd).
Nov 5 15:56:17.466240 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 15:56:17.466261 kernel: SELinux: policy capability open_perms=1
Nov 5 15:56:17.466278 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 15:56:17.466294 kernel: SELinux: policy capability always_check_network=0
Nov 5 15:56:17.466438 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 15:56:17.466460 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 15:56:17.466477 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 15:56:17.466494 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 15:56:17.466509 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 15:56:17.466531 kernel: audit: type=1403 audit(1762358174.433:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 15:56:17.466554 systemd[1]: Successfully loaded SELinux policy in 125.549ms.
Nov 5 15:56:17.466591 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.910ms.
Nov 5 15:56:17.466609 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:56:17.466625 systemd[1]: Detected virtualization kvm.
Nov 5 15:56:17.466641 systemd[1]: Detected architecture x86-64.
Nov 5 15:56:17.466657 systemd[1]: Detected first boot.
Nov 5 15:56:17.466674 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 15:56:17.466693 zram_generator::config[1177]: No configuration found.
Nov 5 15:56:17.466719 kernel: Guest personality initialized and is inactive
Nov 5 15:56:17.466741 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 15:56:17.466757 kernel: Initialized host personality
Nov 5 15:56:17.466773 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 15:56:17.466794 systemd[1]: Populated /etc with preset unit settings.
Nov 5 15:56:17.466810 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 15:56:17.466827 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 15:56:17.466854 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:56:17.466871 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 15:56:17.466887 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 15:56:17.466903 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 15:56:17.466920 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 15:56:17.466937 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 15:56:17.466953 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 15:56:17.466986 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 15:56:17.467004 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 15:56:17.467022 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:56:17.467039 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:56:17.467057 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 15:56:17.467074 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 15:56:17.467095 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 15:56:17.467126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:56:17.467144 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 15:56:17.467162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:56:17.467179 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:56:17.467196 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 15:56:17.467214 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 15:56:17.467243 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:56:17.467260 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 15:56:17.467277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:56:17.467295 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:56:17.467312 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:56:17.467328 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:56:17.467345 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 15:56:17.467373 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 15:56:17.467390 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 15:56:17.467406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:56:17.467453 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:56:17.467470 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:56:17.467487 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 15:56:17.467504 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 15:56:17.467535 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 15:56:17.467553 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 15:56:17.467570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:56:17.467587 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 15:56:17.467604 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 15:56:17.467621 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 15:56:17.467638 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 15:56:17.467667 systemd[1]: Reached target machines.target - Containers.
Nov 5 15:56:17.467685 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 15:56:17.467704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:56:17.467721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:56:17.467739 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 15:56:17.467756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:56:17.467774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:56:17.467802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:56:17.467829 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 15:56:17.467846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:56:17.467862 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 15:56:17.467879 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 15:56:17.467897 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 15:56:17.467913 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 15:56:17.467943 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 15:56:17.467963 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:56:17.467979 kernel: fuse: init (API version 7.41)
Nov 5 15:56:17.467997 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:56:17.468015 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:56:17.468031 kernel: ACPI: bus type drm_connector registered
Nov 5 15:56:17.468048 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:56:17.468076 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 15:56:17.468094 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 15:56:17.468111 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:56:17.468139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:56:17.468157 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 15:56:17.468204 systemd-journald[1242]: Collecting audit messages is disabled.
Nov 5 15:56:17.468236 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 15:56:17.468265 systemd-journald[1242]: Journal started
Nov 5 15:56:17.468293 systemd-journald[1242]: Runtime Journal (/run/log/journal/8e3d0e543d8343fe890c313c4e899156) is 5.9M, max 47.9M, 41.9M free.
Nov 5 15:56:15.917899 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 15:56:15.946013 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 5 15:56:15.948181 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 15:56:17.476091 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:56:17.484101 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 15:56:17.488654 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 15:56:17.509243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 15:56:17.514836 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 15:56:17.528769 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 15:56:17.535896 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:56:17.538967 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 15:56:17.539270 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 15:56:17.548712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:56:17.549068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:56:17.554763 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:56:17.555098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:56:17.558041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:56:17.558561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:56:17.561282 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 15:56:17.561919 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 15:56:17.565876 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:56:17.566482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:56:17.569953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:56:17.572816 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:56:17.577515 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 15:56:17.581362 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 15:56:17.601245 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:56:17.605606 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 15:56:17.614955 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 15:56:17.623579 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 15:56:17.633662 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 15:56:17.633917 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:56:17.638164 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 15:56:17.644026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:56:17.651706 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 15:56:17.657645 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 15:56:17.664629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:56:17.668611 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 15:56:17.674625 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:56:17.678749 systemd-journald[1242]: Time spent on flushing to /var/log/journal/8e3d0e543d8343fe890c313c4e899156 is 25.082ms for 1020 entries.
Nov 5 15:56:17.678749 systemd-journald[1242]: System Journal (/var/log/journal/8e3d0e543d8343fe890c313c4e899156) is 8M, max 163.5M, 155.5M free.
Nov 5 15:56:17.735469 systemd-journald[1242]: Received client request to flush runtime journal.
Nov 5 15:56:17.679202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:56:17.687598 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 15:56:17.696310 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 15:56:17.708420 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:56:17.712033 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 15:56:17.718856 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 15:56:17.726102 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 15:56:17.741029 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 15:56:17.744688 kernel: loop1: detected capacity change from 0 to 219144
Nov 5 15:56:17.755166 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 15:56:17.770882 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 15:56:17.823051 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Nov 5 15:56:17.823085 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Nov 5 15:56:17.830710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:56:17.849974 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:56:17.865377 kernel: loop2: detected capacity change from 0 to 110984
Nov 5 15:56:17.870910 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 15:56:17.905142 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 15:56:17.929929 kernel: loop3: detected capacity change from 0 to 128048
Nov 5 15:56:17.975922 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 15:56:17.986694 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:56:17.993642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:56:18.005835 kernel: loop4: detected capacity change from 0 to 219144
Nov 5 15:56:18.019678 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 15:56:18.031865 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Nov 5 15:56:18.031888 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Nov 5 15:56:18.054547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:56:18.074484 kernel: loop5: detected capacity change from 0 to 110984 Nov 5 15:56:18.162474 kernel: loop6: detected capacity change from 0 to 128048 Nov 5 15:56:18.173523 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 15:56:18.186571 (sd-merge)[1320]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 5 15:56:18.193459 (sd-merge)[1320]: Merged extensions into '/usr'. Nov 5 15:56:18.206882 systemd[1]: Reload requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 15:56:18.207409 systemd[1]: Reloading... Nov 5 15:56:18.336466 zram_generator::config[1355]: No configuration found. Nov 5 15:56:18.434217 systemd-resolved[1318]: Positive Trust Anchors: Nov 5 15:56:18.434752 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:56:18.434768 systemd-resolved[1318]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:56:18.434815 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:56:18.447434 systemd-resolved[1318]: Defaulting to hostname 'linux'. Nov 5 15:56:18.781671 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 15:56:18.782942 systemd[1]: Reloading finished in 574 ms. 
Nov 5 15:56:21.398520 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2741583316 wd_nsec: 2741582205 Nov 5 15:56:21.436332 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:56:21.439524 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 15:56:21.447370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:56:21.468958 systemd[1]: Starting ensure-sysext.service... Nov 5 15:56:21.473562 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:56:21.499325 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 15:56:21.504440 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:56:21.509086 systemd[1]: Reload requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)... Nov 5 15:56:21.509126 systemd[1]: Reloading... Nov 5 15:56:21.509913 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 15:56:21.509965 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 15:56:21.510394 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 15:56:21.510823 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 15:56:21.512072 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 15:56:21.512490 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Nov 5 15:56:21.512582 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Nov 5 15:56:21.523584 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 5 15:56:21.523596 systemd-tmpfiles[1393]: Skipping /boot Nov 5 15:56:21.536719 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:56:21.536846 systemd-tmpfiles[1393]: Skipping /boot Nov 5 15:56:21.556183 systemd-udevd[1396]: Using default interface naming scheme 'v257'. Nov 5 15:56:21.590535 zram_generator::config[1425]: No configuration found. Nov 5 15:56:21.792714 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 15:56:21.805452 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 15:56:21.805522 kernel: ACPI: button: Power Button [PWRF] Nov 5 15:56:21.971100 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 15:56:21.973572 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 15:56:21.973682 systemd[1]: Reloading finished in 464 ms. Nov 5 15:56:22.241095 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 5 15:56:22.241613 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 5 15:56:22.248616 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 5 15:56:22.255643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:56:22.317174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:56:22.341608 kernel: kvm_amd: TSC scaling supported Nov 5 15:56:22.341769 kernel: kvm_amd: Nested Virtualization enabled Nov 5 15:56:22.341787 kernel: kvm_amd: Nested Paging enabled Nov 5 15:56:22.343303 kernel: kvm_amd: LBR virtualization supported Nov 5 15:56:22.343336 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 5 15:56:22.344221 kernel: kvm_amd: Virtual GIF supported Nov 5 15:56:22.363211 systemd[1]: Finished ensure-sysext.service. 
Nov 5 15:56:22.376502 kernel: EDAC MC: Ver: 3.0.0 Nov 5 15:56:22.389416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:56:22.391306 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:56:22.394846 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:56:22.397096 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:56:22.398656 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:56:22.411027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:56:22.415572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:56:22.420072 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:56:22.424166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:56:22.426410 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:56:22.428730 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 15:56:22.431358 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:56:22.433110 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:56:22.449827 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:56:22.456636 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 15:56:22.461028 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 5 15:56:22.463930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:56:22.464977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:56:22.468884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:56:22.469246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:56:22.470461 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:56:22.470799 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:56:22.471737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:56:22.472015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:56:22.473228 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:56:22.473557 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:56:22.505621 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:56:22.509613 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:56:22.509757 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:56:22.519879 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:56:22.527369 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 15:56:22.554048 augenrules[1553]: No rules Nov 5 15:56:22.556809 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:56:22.557171 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 5 15:56:22.602206 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 15:56:22.603112 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:56:22.612789 systemd-networkd[1522]: lo: Link UP Nov 5 15:56:22.612803 systemd-networkd[1522]: lo: Gained carrier Nov 5 15:56:22.615004 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:56:22.616370 systemd-networkd[1522]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:56:22.616579 systemd-networkd[1522]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:56:22.616789 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:56:22.618468 systemd-networkd[1522]: eth0: Link UP Nov 5 15:56:22.618869 systemd[1]: Reached target network.target - Network. Nov 5 15:56:22.619037 systemd-networkd[1522]: eth0: Gained carrier Nov 5 15:56:22.619055 systemd-networkd[1522]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:56:22.621963 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:56:22.624917 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 15:56:22.626053 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:56:22.665566 systemd-networkd[1522]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 15:56:22.673421 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. 
Nov 5 15:56:23.910636 systemd-timesyncd[1523]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 5 15:56:23.911852 systemd-resolved[1318]: Clock change detected. Flushing caches. Nov 5 15:56:23.912279 systemd-timesyncd[1523]: Initial clock synchronization to Wed 2025-11-05 15:56:23.910517 UTC. Nov 5 15:56:23.914975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:56:23.940776 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:56:24.765176 ldconfig[1509]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:56:24.782695 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:56:24.790407 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:56:24.851890 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:56:24.854868 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:56:24.869849 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 15:56:24.872899 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:56:24.882271 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 15:56:24.884802 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:56:24.887533 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:56:24.890548 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:56:24.893205 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:56:24.899250 systemd[1]: Reached target paths.target - Path Units. 
Nov 5 15:56:24.910440 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:56:24.915705 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:56:24.921107 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:56:24.935217 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:56:24.938284 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:56:24.944408 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:56:24.954508 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:56:24.958123 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:56:24.962476 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 15:56:24.966152 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:56:24.969157 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:56:24.972655 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:56:24.972715 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:56:24.976257 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:56:24.985474 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:56:24.995896 systemd-networkd[1522]: eth0: Gained IPv6LL Nov 5 15:56:25.002104 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:56:25.009357 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:56:25.022548 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Nov 5 15:56:25.025579 jq[1577]: false Nov 5 15:56:25.029240 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:56:25.037923 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 15:56:25.047702 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:56:25.058564 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:56:25.067802 extend-filesystems[1578]: Found /dev/vda6 Nov 5 15:56:25.076390 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:56:25.079339 oslogin_cache_refresh[1579]: Refreshing passwd entry cache Nov 5 15:56:25.082789 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing passwd entry cache Nov 5 15:56:25.083474 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:56:25.095203 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting users, quitting Nov 5 15:56:25.095326 oslogin_cache_refresh[1579]: Failure getting users, quitting Nov 5 15:56:25.102047 extend-filesystems[1578]: Found /dev/vda9 Nov 5 15:56:25.103666 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:56:25.103666 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing group entry cache Nov 5 15:56:25.098326 oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 5 15:56:25.103883 extend-filesystems[1578]: Checking size of /dev/vda9 Nov 5 15:56:25.098417 oslogin_cache_refresh[1579]: Refreshing group entry cache Nov 5 15:56:25.112783 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting groups, quitting Nov 5 15:56:25.112783 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:56:25.108797 oslogin_cache_refresh[1579]: Failure getting groups, quitting Nov 5 15:56:25.108814 oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:56:25.120052 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:56:25.125876 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:56:25.126871 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:56:25.128388 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:56:25.154169 extend-filesystems[1578]: Resized partition /dev/vda9 Nov 5 15:56:25.168522 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:56:25.180015 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:56:25.198022 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:56:25.204930 extend-filesystems[1603]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:56:25.214808 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:56:25.219296 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:56:25.219861 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Nov 5 15:56:25.220221 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 15:56:25.221471 jq[1604]: true Nov 5 15:56:25.227846 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:56:25.228426 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 5 15:56:25.228379 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:56:25.246059 update_engine[1597]: I20251105 15:56:25.245723 1597 main.cc:92] Flatcar Update Engine starting Nov 5 15:56:25.246250 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:56:25.250134 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:56:25.299161 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 5 15:56:25.307334 jq[1613]: true Nov 5 15:56:25.336732 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:56:25.345655 (ntainerd)[1625]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:56:25.349390 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 15:56:25.391335 extend-filesystems[1603]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 15:56:25.391335 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 5 15:56:25.391335 extend-filesystems[1603]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 5 15:56:25.418370 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Nov 5 15:56:25.411459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:56:25.422719 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:56:25.435041 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:56:25.437416 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 5 15:56:25.444695 dbus-daemon[1575]: [system] SELinux support is enabled Nov 5 15:56:25.446212 systemd-logind[1592]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 15:56:25.446266 systemd-logind[1592]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 15:56:25.451092 update_engine[1597]: I20251105 15:56:25.450807 1597 update_check_scheduler.cc:74] Next update check in 10m8s Nov 5 15:56:25.468376 systemd-logind[1592]: New seat seat0. Nov 5 15:56:25.475321 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:56:25.492866 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:56:25.512710 tar[1611]: linux-amd64/LICENSE Nov 5 15:56:25.512710 tar[1611]: linux-amd64/helm Nov 5 15:56:25.503479 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:56:25.503529 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 15:56:25.525956 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 5 15:56:25.508253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:56:25.508279 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:56:25.535820 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:56:25.546699 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:56:25.554248 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 15:56:25.554700 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Nov 5 15:56:25.566569 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:56:25.566356 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:56:25.573332 bash[1648]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:56:25.576411 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:56:25.586599 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:56:25.596415 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 15:56:25.632041 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:56:25.642492 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:56:25.697564 locksmithd[1659]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:56:25.709156 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:56:25.709544 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:56:25.726717 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:56:25.788268 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:56:25.798748 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:56:25.812487 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:56:25.816666 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 5 15:56:26.161256 containerd[1625]: time="2025-11-05T15:56:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:56:26.173597 containerd[1625]: time="2025-11-05T15:56:26.172858340Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:56:26.267906 containerd[1625]: time="2025-11-05T15:56:26.264615157Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.244µs" Nov 5 15:56:26.267906 containerd[1625]: time="2025-11-05T15:56:26.267348273Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.284007495Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.284376707Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.284396524Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.284431350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.284527951Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.284587092Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286480 
containerd[1625]: time="2025-11-05T15:56:26.284941546Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.284959310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.284971172Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.284980429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.285108119Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286480 containerd[1625]: time="2025-11-05T15:56:26.285423670Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286910 containerd[1625]: time="2025-11-05T15:56:26.285457223Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:56:26.286910 containerd[1625]: time="2025-11-05T15:56:26.285468264Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:56:26.286910 containerd[1625]: time="2025-11-05T15:56:26.285513168Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:56:26.286910 containerd[1625]: 
time="2025-11-05T15:56:26.285744472Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 15:56:26.286910 containerd[1625]: time="2025-11-05T15:56:26.285827147Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 15:56:26.376249 containerd[1625]: time="2025-11-05T15:56:26.376173719Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 15:56:26.376571 containerd[1625]: time="2025-11-05T15:56:26.376551408Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 15:56:26.376660 containerd[1625]: time="2025-11-05T15:56:26.376646746Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 15:56:26.376725 containerd[1625]: time="2025-11-05T15:56:26.376709684Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 15:56:26.376784 containerd[1625]: time="2025-11-05T15:56:26.376769356Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 15:56:26.376872 containerd[1625]: time="2025-11-05T15:56:26.376849066Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 15:56:26.376959 containerd[1625]: time="2025-11-05T15:56:26.376938163Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 15:56:26.377049 containerd[1625]: time="2025-11-05T15:56:26.377025887Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378153652Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378192365Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378210549Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378231849Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378496966Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378524277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378544385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378572478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378587927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378602965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378619215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378632811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378648991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378676583Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 15:56:26.380846 containerd[1625]: time="2025-11-05T15:56:26.378692864Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 15:56:26.386128 containerd[1625]: time="2025-11-05T15:56:26.381607489Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 15:56:26.386128 containerd[1625]: time="2025-11-05T15:56:26.381691517Z" level=info msg="Start snapshots syncer"
Nov 5 15:56:26.386128 containerd[1625]: time="2025-11-05T15:56:26.381732875Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382163843Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382233123Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382361944Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382575254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382612604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382630799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382651928Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382673368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382692554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382712031Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382756584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382782804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382805787Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382862914Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382892549Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382911826Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382939197Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382952231Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382968051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.382985343Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.383008918Z" level=info msg="runtime interface created"
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.383020720Z" level=info msg="created NRI interface"
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.383033233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.383052910Z" level=info msg="Connect containerd service"
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.383104076Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 5 15:56:26.386338 containerd[1625]: time="2025-11-05T15:56:26.384450481Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 5 15:56:26.713385 tar[1611]: linux-amd64/README.md
Nov 5 15:56:26.796101 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 5 15:56:26.885734 containerd[1625]: time="2025-11-05T15:56:26.885649939Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 5 15:56:26.885875 containerd[1625]: time="2025-11-05T15:56:26.885757010Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 5 15:56:26.885875 containerd[1625]: time="2025-11-05T15:56:26.885789881Z" level=info msg="Start subscribing containerd event"
Nov 5 15:56:26.885875 containerd[1625]: time="2025-11-05T15:56:26.885833243Z" level=info msg="Start recovering state"
Nov 5 15:56:26.885996 containerd[1625]: time="2025-11-05T15:56:26.885967595Z" level=info msg="Start event monitor"
Nov 5 15:56:26.885996 containerd[1625]: time="2025-11-05T15:56:26.885987282Z" level=info msg="Start cni network conf syncer for default"
Nov 5 15:56:26.886047 containerd[1625]: time="2025-11-05T15:56:26.885998823Z" level=info msg="Start streaming server"
Nov 5 15:56:26.886047 containerd[1625]: time="2025-11-05T15:56:26.886010225Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 5 15:56:26.886047 containerd[1625]: time="2025-11-05T15:56:26.886019712Z" level=info msg="runtime interface starting up..."
Nov 5 15:56:26.886047 containerd[1625]: time="2025-11-05T15:56:26.886027557Z" level=info msg="starting plugins..."
Nov 5 15:56:26.891655 containerd[1625]: time="2025-11-05T15:56:26.886048196Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 5 15:56:26.891655 containerd[1625]: time="2025-11-05T15:56:26.890805277Z" level=info msg="containerd successfully booted in 0.734156s"
Nov 5 15:56:26.891083 systemd[1]: Started containerd.service - containerd container runtime.
Nov 5 15:56:27.677983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:56:27.681850 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 5 15:56:27.684188 systemd[1]: Startup finished in 3.185s (kernel) + 7.140s (initrd) + 12.136s (userspace) = 22.462s.
Nov 5 15:56:27.694899 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:56:28.373650 kubelet[1717]: E1105 15:56:28.373584 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:56:28.377669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:56:28.377868 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:56:28.378255 systemd[1]: kubelet.service: Consumed 1.401s CPU time, 257.7M memory peak.
Nov 5 15:56:33.936811 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 15:56:33.938739 systemd[1]: Started sshd@0-10.0.0.107:22-10.0.0.1:49072.service - OpenSSH per-connection server daemon (10.0.0.1:49072).
Nov 5 15:56:34.171463 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 49072 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:56:34.175577 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:34.194997 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 15:56:34.196671 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 15:56:34.207112 systemd-logind[1592]: New session 1 of user core.
Nov 5 15:56:34.237710 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 15:56:34.242972 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 15:56:34.270569 (systemd)[1735]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 15:56:34.277224 systemd-logind[1592]: New session c1 of user core.
Nov 5 15:56:34.458771 systemd[1735]: Queued start job for default target default.target.
Nov 5 15:56:34.476095 systemd[1735]: Created slice app.slice - User Application Slice.
Nov 5 15:56:34.476138 systemd[1735]: Reached target paths.target - Paths.
Nov 5 15:56:34.476203 systemd[1735]: Reached target timers.target - Timers.
Nov 5 15:56:34.478253 systemd[1735]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 15:56:34.492566 systemd[1735]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 15:56:34.492769 systemd[1735]: Reached target sockets.target - Sockets.
Nov 5 15:56:34.492852 systemd[1735]: Reached target basic.target - Basic System.
Nov 5 15:56:34.492918 systemd[1735]: Reached target default.target - Main User Target.
Nov 5 15:56:34.492960 systemd[1735]: Startup finished in 202ms.
Nov 5 15:56:34.493247 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 15:56:34.495441 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 15:56:34.578276 systemd[1]: Started sshd@1-10.0.0.107:22-10.0.0.1:49082.service - OpenSSH per-connection server daemon (10.0.0.1:49082).
Nov 5 15:56:34.679949 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 49082 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:56:34.688706 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:34.728420 systemd-logind[1592]: New session 2 of user core.
Nov 5 15:56:34.754699 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 15:56:34.852239 sshd[1749]: Connection closed by 10.0.0.1 port 49082
Nov 5 15:56:34.857437 sshd-session[1746]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:34.882019 systemd[1]: sshd@1-10.0.0.107:22-10.0.0.1:49082.service: Deactivated successfully.
Nov 5 15:56:34.884529 systemd[1]: session-2.scope: Deactivated successfully.
Nov 5 15:56:34.890614 systemd-logind[1592]: Session 2 logged out. Waiting for processes to exit.
Nov 5 15:56:34.895403 systemd[1]: Started sshd@2-10.0.0.107:22-10.0.0.1:49096.service - OpenSSH per-connection server daemon (10.0.0.1:49096).
Nov 5 15:56:34.898033 systemd-logind[1592]: Removed session 2.
Nov 5 15:56:34.994100 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 49096 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:56:34.998536 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:35.019227 systemd-logind[1592]: New session 3 of user core.
Nov 5 15:56:35.040298 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 15:56:35.116976 sshd[1758]: Connection closed by 10.0.0.1 port 49096
Nov 5 15:56:35.118952 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:35.132454 systemd[1]: sshd@2-10.0.0.107:22-10.0.0.1:49096.service: Deactivated successfully.
Nov 5 15:56:35.135282 systemd[1]: session-3.scope: Deactivated successfully.
Nov 5 15:56:35.137472 systemd-logind[1592]: Session 3 logged out. Waiting for processes to exit.
Nov 5 15:56:35.142216 systemd[1]: Started sshd@3-10.0.0.107:22-10.0.0.1:49100.service - OpenSSH per-connection server daemon (10.0.0.1:49100).
Nov 5 15:56:35.153338 systemd-logind[1592]: Removed session 3.
Nov 5 15:56:35.274186 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 49100 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:56:35.279529 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:35.287988 systemd-logind[1592]: New session 4 of user core.
Nov 5 15:56:35.298700 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 15:56:35.364917 sshd[1767]: Connection closed by 10.0.0.1 port 49100
Nov 5 15:56:35.365273 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:35.380225 systemd[1]: sshd@3-10.0.0.107:22-10.0.0.1:49100.service: Deactivated successfully.
Nov 5 15:56:35.382920 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 15:56:35.383879 systemd-logind[1592]: Session 4 logged out. Waiting for processes to exit.
Nov 5 15:56:35.386665 systemd-logind[1592]: Removed session 4.
Nov 5 15:56:35.388106 systemd[1]: Started sshd@4-10.0.0.107:22-10.0.0.1:49108.service - OpenSSH per-connection server daemon (10.0.0.1:49108).
Nov 5 15:56:35.450810 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 49108 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:56:35.452828 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:35.458053 systemd-logind[1592]: New session 5 of user core.
Nov 5 15:56:35.468461 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 15:56:35.531985 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 15:56:35.532451 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:56:35.554832 sudo[1777]: pam_unix(sudo:session): session closed for user root
Nov 5 15:56:35.558327 sshd[1776]: Connection closed by 10.0.0.1 port 49108
Nov 5 15:56:35.558923 sshd-session[1773]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:35.569921 systemd[1]: sshd@4-10.0.0.107:22-10.0.0.1:49108.service: Deactivated successfully.
Nov 5 15:56:35.572414 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 15:56:35.575013 systemd-logind[1592]: Session 5 logged out. Waiting for processes to exit.
Nov 5 15:56:35.580804 systemd[1]: Started sshd@5-10.0.0.107:22-10.0.0.1:49114.service - OpenSSH per-connection server daemon (10.0.0.1:49114).
Nov 5 15:56:35.586458 systemd-logind[1592]: Removed session 5.
Nov 5 15:56:35.679758 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 49114 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:56:35.682199 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:35.694464 systemd-logind[1592]: New session 6 of user core.
Nov 5 15:56:35.709606 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 15:56:35.777116 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 15:56:35.777497 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:56:36.001376 sudo[1788]: pam_unix(sudo:session): session closed for user root
Nov 5 15:56:36.011471 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 15:56:36.011868 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:56:36.024732 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:56:36.074918 augenrules[1810]: No rules
Nov 5 15:56:36.076629 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:56:36.076931 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:56:36.078166 sudo[1787]: pam_unix(sudo:session): session closed for user root
Nov 5 15:56:36.080116 sshd[1786]: Connection closed by 10.0.0.1 port 49114
Nov 5 15:56:36.080449 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:36.089810 systemd[1]: sshd@5-10.0.0.107:22-10.0.0.1:49114.service: Deactivated successfully.
Nov 5 15:56:36.091850 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 15:56:36.092650 systemd-logind[1592]: Session 6 logged out. Waiting for processes to exit.
Nov 5 15:56:36.095746 systemd[1]: Started sshd@6-10.0.0.107:22-10.0.0.1:49122.service - OpenSSH per-connection server daemon (10.0.0.1:49122).
Nov 5 15:56:36.096388 systemd-logind[1592]: Removed session 6.
Nov 5 15:56:36.154751 sshd[1819]: Accepted publickey for core from 10.0.0.1 port 49122 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:56:36.156061 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:36.160750 systemd-logind[1592]: New session 7 of user core.
Nov 5 15:56:36.170435 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 15:56:36.228240 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 15:56:36.228726 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:56:36.835487 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 15:56:36.857891 (dockerd)[1844]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 15:56:37.602105 dockerd[1844]: time="2025-11-05T15:56:37.602023815Z" level=info msg="Starting up"
Nov 5 15:56:37.602997 dockerd[1844]: time="2025-11-05T15:56:37.602966483Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 15:56:37.621559 dockerd[1844]: time="2025-11-05T15:56:37.621497495Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 15:56:37.973729 dockerd[1844]: time="2025-11-05T15:56:37.973523370Z" level=info msg="Loading containers: start."
Nov 5 15:56:38.033707 kernel: Initializing XFRM netlink socket
Nov 5 15:56:38.415981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:56:38.419099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:56:38.967036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:56:38.990267 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:56:39.355973 kubelet[2000]: E1105 15:56:39.354678 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:56:39.362168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:56:39.362433 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:56:39.362935 systemd[1]: kubelet.service: Consumed 467ms CPU time, 110.5M memory peak.
Nov 5 15:56:40.107002 systemd-networkd[1522]: docker0: Link UP
Nov 5 15:56:40.119621 dockerd[1844]: time="2025-11-05T15:56:40.119551348Z" level=info msg="Loading containers: done."
Nov 5 15:56:40.139381 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck30217527-merged.mount: Deactivated successfully.
Nov 5 15:56:40.150334 dockerd[1844]: time="2025-11-05T15:56:40.150216777Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 15:56:40.150551 dockerd[1844]: time="2025-11-05T15:56:40.150408206Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 15:56:40.150596 dockerd[1844]: time="2025-11-05T15:56:40.150561523Z" level=info msg="Initializing buildkit"
Nov 5 15:56:40.206571 dockerd[1844]: time="2025-11-05T15:56:40.206499176Z" level=info msg="Completed buildkit initialization"
Nov 5 15:56:40.221795 dockerd[1844]: time="2025-11-05T15:56:40.221673383Z" level=info msg="Daemon has completed initialization"
Nov 5 15:56:40.221931 dockerd[1844]: time="2025-11-05T15:56:40.221816492Z" level=info msg="API listen on /run/docker.sock"
Nov 5 15:56:40.222194 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 15:56:42.107818 containerd[1625]: time="2025-11-05T15:56:42.107327257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Nov 5 15:56:43.136150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726139431.mount: Deactivated successfully.
Nov 5 15:56:45.263946 containerd[1625]: time="2025-11-05T15:56:45.263838753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:45.265353 containerd[1625]: time="2025-11-05T15:56:45.264796229Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392"
Nov 5 15:56:45.267220 containerd[1625]: time="2025-11-05T15:56:45.267166394Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:45.271098 containerd[1625]: time="2025-11-05T15:56:45.271015883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:45.272378 containerd[1625]: time="2025-11-05T15:56:45.272285504Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 3.16489598s"
Nov 5 15:56:45.272378 containerd[1625]: time="2025-11-05T15:56:45.272352249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Nov 5 15:56:45.273066 containerd[1625]: time="2025-11-05T15:56:45.273030441Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Nov 5 15:56:48.499599 containerd[1625]: time="2025-11-05T15:56:48.499514220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:48.501519 containerd[1625]: time="2025-11-05T15:56:48.501436104Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757"
Nov 5 15:56:48.503871 containerd[1625]: time="2025-11-05T15:56:48.503586847Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:48.509615 containerd[1625]: time="2025-11-05T15:56:48.509510506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:48.510720 containerd[1625]: time="2025-11-05T15:56:48.510611401Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 3.237549421s"
Nov 5 15:56:48.510720 containerd[1625]: time="2025-11-05T15:56:48.510668698Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Nov 5 15:56:48.519832 containerd[1625]: time="2025-11-05T15:56:48.519791988Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Nov 5 15:56:49.415788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 5 15:56:49.419693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:56:49.741725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:56:49.757927 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:56:50.019634 containerd[1625]: time="2025-11-05T15:56:50.019473185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:50.020674 containerd[1625]: time="2025-11-05T15:56:50.020626107Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093"
Nov 5 15:56:50.022575 containerd[1625]: time="2025-11-05T15:56:50.022544936Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:50.025053 containerd[1625]: time="2025-11-05T15:56:50.025015920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:50.026644 containerd[1625]: time="2025-11-05T15:56:50.026600231Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.50676396s"
Nov 5 15:56:50.026729 containerd[1625]: time="2025-11-05T15:56:50.026648852Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Nov 5 15:56:50.027224 containerd[1625]: time="2025-11-05T15:56:50.027189907Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Nov 5 15:56:50.059584 kubelet[2156]: E1105 15:56:50.059506 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:56:50.064744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:56:50.065012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:56:50.065703 systemd[1]: kubelet.service: Consumed 313ms CPU time, 108.9M memory peak.
Nov 5 15:56:52.427794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344920280.mount: Deactivated successfully.
Nov 5 15:56:53.105963 containerd[1625]: time="2025-11-05T15:56:53.105873167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:53.107124 containerd[1625]: time="2025-11-05T15:56:53.107070152Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699"
Nov 5 15:56:53.109107 containerd[1625]: time="2025-11-05T15:56:53.109046718Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:53.113765 containerd[1625]: time="2025-11-05T15:56:53.113690327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:53.114552 containerd[1625]: time="2025-11-05T15:56:53.114499564Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 3.087281775s"
Nov 5 15:56:53.114552 containerd[1625]: time="2025-11-05T15:56:53.114535983Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Nov 5 15:56:53.117284 containerd[1625]: time="2025-11-05T15:56:53.117240174Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 5 15:56:53.616332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1954591121.mount: Deactivated successfully.
Nov 5 15:56:58.854166 containerd[1625]: time="2025-11-05T15:56:58.854076057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:58.856187 containerd[1625]: time="2025-11-05T15:56:58.856094985Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Nov 5 15:56:58.857664 containerd[1625]: time="2025-11-05T15:56:58.857580771Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:58.864226 containerd[1625]: time="2025-11-05T15:56:58.864128667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:58.866549 containerd[1625]: time="2025-11-05T15:56:58.865507810Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 5.747920665s"
Nov 5 15:56:58.866549 containerd[1625]: time="2025-11-05T15:56:58.865546895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Nov 5 15:56:58.866834 containerd[1625]: time="2025-11-05T15:56:58.866763646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 5 15:56:59.468046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105636360.mount: Deactivated successfully.
Nov 5 15:56:59.487710 containerd[1625]: time="2025-11-05T15:56:59.487615727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:59.495749 containerd[1625]: time="2025-11-05T15:56:59.494344282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Nov 5 15:56:59.497033 containerd[1625]: time="2025-11-05T15:56:59.496975547Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:59.502954 containerd[1625]: time="2025-11-05T15:56:59.502842333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:56:59.505194 containerd[1625]: time="2025-11-05T15:56:59.505130521Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 638.332139ms"
Nov 5
15:56:59.505194 containerd[1625]: time="2025-11-05T15:56:59.505168935Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 5 15:56:59.505972 containerd[1625]: time="2025-11-05T15:56:59.505922867Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 5 15:57:00.164148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 5 15:57:00.167558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:57:00.592960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:57:00.605756 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:57:01.057124 kubelet[2265]: E1105 15:57:01.057046 2265 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:57:01.060889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:57:01.061163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:57:01.061761 systemd[1]: kubelet.service: Consumed 347ms CPU time, 110.4M memory peak. 
Nov 5 15:57:08.229137 containerd[1625]: time="2025-11-05T15:57:08.229047681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:57:08.260763 containerd[1625]: time="2025-11-05T15:57:08.260629692Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593"
Nov 5 15:57:08.371757 containerd[1625]: time="2025-11-05T15:57:08.371049214Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:57:08.415587 containerd[1625]: time="2025-11-05T15:57:08.415044659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:57:08.425406 containerd[1625]: time="2025-11-05T15:57:08.422477616Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 8.916491989s"
Nov 5 15:57:08.425406 containerd[1625]: time="2025-11-05T15:57:08.422547379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Nov 5 15:57:10.254674 update_engine[1597]: I20251105 15:57:10.253563 1597 update_attempter.cc:509] Updating boot flags...
Nov 5 15:57:11.164469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 5 15:57:11.170925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:57:11.551730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:57:11.570987 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:57:11.633403 kubelet[2337]: E1105 15:57:11.633289 2337 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:57:11.637905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:57:11.638158 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:57:11.638683 systemd[1]: kubelet.service: Consumed 284ms CPU time, 110.5M memory peak.
Nov 5 15:57:13.347806 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:57:13.347995 systemd[1]: kubelet.service: Consumed 284ms CPU time, 110.5M memory peak.
Nov 5 15:57:13.350845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:57:13.384691 systemd[1]: Reload requested from client PID 2353 ('systemctl') (unit session-7.scope)...
Nov 5 15:57:13.384731 systemd[1]: Reloading...
Nov 5 15:57:13.531355 zram_generator::config[2396]: No configuration found.
Nov 5 15:57:15.224991 systemd[1]: Reloading finished in 1839 ms.
Nov 5 15:57:15.303337 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 5 15:57:15.303468 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 5 15:57:15.303840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:57:15.303896 systemd[1]: kubelet.service: Consumed 191ms CPU time, 98.2M memory peak.
Nov 5 15:57:15.305998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:57:15.517630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:57:15.538608 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 15:57:15.581550 kubelet[2444]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 15:57:15.581550 kubelet[2444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:57:15.581977 kubelet[2444]: I1105 15:57:15.581574 2444 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 15:57:16.277944 kubelet[2444]: I1105 15:57:16.276835 2444 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 5 15:57:16.277944 kubelet[2444]: I1105 15:57:16.277846 2444 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 15:57:16.277944 kubelet[2444]: I1105 15:57:16.278045 2444 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 5 15:57:16.278927 kubelet[2444]: I1105 15:57:16.278496 2444 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 15:57:16.281549 kubelet[2444]: I1105 15:57:16.279238 2444 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 15:57:17.646811 kubelet[2444]: E1105 15:57:17.646331 2444 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 5 15:57:17.651484 kubelet[2444]: I1105 15:57:17.651383 2444 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 15:57:17.673949 kubelet[2444]: I1105 15:57:17.673597 2444 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 15:57:17.699132 kubelet[2444]: I1105 15:57:17.695927 2444 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 5 15:57:17.699132 kubelet[2444]: I1105 15:57:17.697536 2444 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 15:57:17.700232 kubelet[2444]: I1105 15:57:17.699820 2444 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 15:57:17.702604 kubelet[2444]: I1105 15:57:17.701163 2444 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 15:57:17.709150 kubelet[2444]: I1105 15:57:17.707214 2444 container_manager_linux.go:306] "Creating device plugin manager"
Nov 5 15:57:17.709150 kubelet[2444]: I1105 15:57:17.708844 2444 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 5 15:57:17.834434 kubelet[2444]: I1105 15:57:17.834354 2444 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:57:17.835230 kubelet[2444]: I1105 15:57:17.834692 2444 kubelet.go:475] "Attempting to sync node with API server"
Nov 5 15:57:17.835230 kubelet[2444]: I1105 15:57:17.834724 2444 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 15:57:17.835230 kubelet[2444]: I1105 15:57:17.834754 2444 kubelet.go:387] "Adding apiserver pod source"
Nov 5 15:57:17.835230 kubelet[2444]: I1105 15:57:17.834793 2444 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 15:57:17.836167 kubelet[2444]: E1105 15:57:17.836105 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 5 15:57:17.836278 kubelet[2444]: E1105 15:57:17.836232 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 15:57:17.847879 kubelet[2444]: I1105 15:57:17.845965 2444 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 15:57:17.847879 kubelet[2444]: I1105 15:57:17.846719 2444 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 15:57:17.847879 kubelet[2444]: I1105 15:57:17.846756 2444 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 5 15:57:17.847879 kubelet[2444]: W1105 15:57:17.846841 2444 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 15:57:17.858752 kubelet[2444]: I1105 15:57:17.854763 2444 server.go:1262] "Started kubelet"
Nov 5 15:57:17.858752 kubelet[2444]: I1105 15:57:17.857854 2444 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 15:57:17.859121 kubelet[2444]: I1105 15:57:17.856062 2444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 15:57:17.862167 kubelet[2444]: I1105 15:57:17.862111 2444 server.go:310] "Adding debug handlers to kubelet server"
Nov 5 15:57:17.865068 kubelet[2444]: I1105 15:57:17.856207 2444 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 15:57:17.865893 kubelet[2444]: I1105 15:57:17.865856 2444 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 15:57:17.867126 kubelet[2444]: I1105 15:57:17.867103 2444 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 5 15:57:17.867731 kubelet[2444]: I1105 15:57:17.867710 2444 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 15:57:17.867980 kubelet[2444]: I1105 15:57:17.867961 2444 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 5 15:57:17.868159 kubelet[2444]: I1105 15:57:17.868145 2444 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 5 15:57:17.868292 kubelet[2444]: I1105 15:57:17.868279 2444 reconciler.go:29] "Reconciler: start to sync state"
Nov 5 15:57:17.869761 kubelet[2444]: E1105 15:57:17.866347 2444 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875277ade503d90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:57:17.854711184 +0000 UTC m=+2.312056196,LastTimestamp:2025-11-05 15:57:17.854711184 +0000 UTC m=+2.312056196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 5 15:57:17.869761 kubelet[2444]: E1105 15:57:17.868841 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 15:57:17.869761 kubelet[2444]: E1105 15:57:17.868975 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:57:17.869761 kubelet[2444]: E1105 15:57:17.869055 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="200ms"
Nov 5 15:57:17.869761 kubelet[2444]: I1105 15:57:17.869353 2444 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 15:57:17.872749 kubelet[2444]: E1105 15:57:17.872707 2444 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 15:57:17.880403 kubelet[2444]: I1105 15:57:17.880208 2444 factory.go:223] Registration of the containerd container factory successfully
Nov 5 15:57:17.880403 kubelet[2444]: I1105 15:57:17.880242 2444 factory.go:223] Registration of the systemd container factory successfully
Nov 5 15:57:17.943105 kubelet[2444]: I1105 15:57:17.943003 2444 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 5 15:57:17.946698 kubelet[2444]: I1105 15:57:17.945709 2444 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 5 15:57:17.946698 kubelet[2444]: I1105 15:57:17.945744 2444 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 5 15:57:17.946698 kubelet[2444]: I1105 15:57:17.945769 2444 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 5 15:57:17.946698 kubelet[2444]: E1105 15:57:17.945825 2444 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 15:57:17.946698 kubelet[2444]: E1105 15:57:17.946597 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 15:57:17.946977 kubelet[2444]: I1105 15:57:17.946856 2444 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 15:57:17.946977 kubelet[2444]: I1105 15:57:17.946871 2444 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 15:57:17.947053 kubelet[2444]: I1105 15:57:17.947004 2444 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:57:17.970021 kubelet[2444]: E1105 15:57:17.969891 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:57:17.986923 kubelet[2444]: I1105 15:57:17.986718 2444 policy_none.go:49] "None policy: Start"
Nov 5 15:57:17.986923 kubelet[2444]: I1105 15:57:17.986768 2444 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 5 15:57:17.986923 kubelet[2444]: I1105 15:57:17.986787 2444 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 5 15:57:18.027032 kubelet[2444]: I1105 15:57:18.026953 2444 policy_none.go:47] "Start"
Nov 5 15:57:18.033779 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 15:57:18.046741 kubelet[2444]: E1105 15:57:18.046675 2444 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 5 15:57:18.069715 kubelet[2444]: E1105 15:57:18.069612 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="400ms"
Nov 5 15:57:18.070358 kubelet[2444]: E1105 15:57:18.070334 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:57:18.075542 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 15:57:18.082518 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 15:57:18.103471 kubelet[2444]: E1105 15:57:18.103120 2444 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 15:57:18.106658 kubelet[2444]: I1105 15:57:18.106292 2444 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 15:57:18.106658 kubelet[2444]: I1105 15:57:18.106394 2444 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 15:57:18.107214 kubelet[2444]: I1105 15:57:18.107145 2444 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 15:57:18.110827 kubelet[2444]: E1105 15:57:18.110789 2444 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 15:57:18.110945 kubelet[2444]: E1105 15:57:18.110848 2444 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 5 15:57:18.210271 kubelet[2444]: I1105 15:57:18.210102 2444 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:57:18.211374 kubelet[2444]: E1105 15:57:18.211283 2444 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost"
Nov 5 15:57:18.270858 kubelet[2444]: I1105 15:57:18.270618 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e64f79d2d5d2a2152f5256e0c27def88-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e64f79d2d5d2a2152f5256e0c27def88\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:57:18.270858 kubelet[2444]: I1105 15:57:18.270700 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e64f79d2d5d2a2152f5256e0c27def88-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e64f79d2d5d2a2152f5256e0c27def88\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:57:18.270858 kubelet[2444]: I1105 15:57:18.270725 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e64f79d2d5d2a2152f5256e0c27def88-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e64f79d2d5d2a2152f5256e0c27def88\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:57:18.305028 systemd[1]: Created slice kubepods-burstable-pode64f79d2d5d2a2152f5256e0c27def88.slice - libcontainer container kubepods-burstable-pode64f79d2d5d2a2152f5256e0c27def88.slice.
Nov 5 15:57:18.316442 kubelet[2444]: E1105 15:57:18.316389 2444 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:57:18.371505 kubelet[2444]: I1105 15:57:18.371293 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:57:18.371505 kubelet[2444]: I1105 15:57:18.371374 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:57:18.371505 kubelet[2444]: I1105 15:57:18.371393 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:57:18.371505 kubelet[2444]: I1105 15:57:18.371410 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:57:18.371505 kubelet[2444]: I1105 15:57:18.371430 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:57:18.398194 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice.
Nov 5 15:57:18.400556 kubelet[2444]: E1105 15:57:18.400524 2444 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:57:18.412993 kubelet[2444]: I1105 15:57:18.412973 2444 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:57:18.413364 kubelet[2444]: E1105 15:57:18.413337 2444 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost"
Nov 5 15:57:18.435788 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice.
Nov 5 15:57:18.437943 kubelet[2444]: E1105 15:57:18.437896 2444 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:57:18.471140 kubelet[2444]: E1105 15:57:18.470970 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="800ms"
Nov 5 15:57:18.472047 kubelet[2444]: I1105 15:57:18.472013 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 15:57:18.679452 kubelet[2444]: E1105 15:57:18.679369 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:57:18.680403 containerd[1625]: time="2025-11-05T15:57:18.680348410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e64f79d2d5d2a2152f5256e0c27def88,Namespace:kube-system,Attempt:0,}"
Nov 5 15:57:18.705697 kubelet[2444]: E1105 15:57:18.705639 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:57:18.706435 containerd[1625]: time="2025-11-05T15:57:18.706218906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}"
Nov 5 15:57:18.742795 kubelet[2444]: E1105 15:57:18.742546 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:57:18.743608 containerd[1625]: time="2025-11-05T15:57:18.743258332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}"
Nov 5 15:57:18.815724 kubelet[2444]: I1105 15:57:18.815682 2444 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:57:18.816215 kubelet[2444]: E1105 15:57:18.816168 2444 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost"
Nov 5 15:57:18.838994 kubelet[2444]: E1105 15:57:18.838927 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 15:57:18.988997 kubelet[2444]: E1105 15:57:18.988915 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 15:57:19.272517 kubelet[2444]: E1105 15:57:19.272456 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="1.6s"
Nov 5 15:57:19.325961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24996077.mount: Deactivated successfully.
Nov 5 15:57:19.335419 containerd[1625]: time="2025-11-05T15:57:19.335323792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:57:19.339546 containerd[1625]: time="2025-11-05T15:57:19.339455844Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 5 15:57:19.340488 containerd[1625]: time="2025-11-05T15:57:19.340452274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:57:19.341993 containerd[1625]: time="2025-11-05T15:57:19.341819792Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:57:19.342931 containerd[1625]: time="2025-11-05T15:57:19.342883769Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:57:19.343810 containerd[1625]: time="2025-11-05T15:57:19.343775100Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 5 15:57:19.344446 kubelet[2444]: E1105 15:57:19.344399 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 15:57:19.344903 containerd[1625]: time="2025-11-05T15:57:19.344874413Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 5 15:57:19.346557 containerd[1625]: time="2025-11-05T15:57:19.346525516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:57:19.348046 containerd[1625]: time="2025-11-05T15:57:19.348003905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 638.545699ms"
Nov 5 15:57:19.349439 containerd[1625]: time="2025-11-05T15:57:19.349393656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 602.135136ms"
Nov 5 15:57:19.350112 containerd[1625]: time="2025-11-05T15:57:19.349900752Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 664.668668ms"
Nov 5 15:57:19.367345 kubelet[2444]: E1105 15:57:19.367163 2444 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875277ade503d90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:57:17.854711184 +0000 UTC m=+2.312056196,LastTimestamp:2025-11-05 15:57:17.854711184 +0000 UTC m=+2.312056196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 5 15:57:19.390957 containerd[1625]: time="2025-11-05T15:57:19.390898617Z" level=info msg="connecting to shim eea36a21d00918209944243133ffe4111484243e53190891c53a5d85b37d435d" address="unix:///run/containerd/s/a0eee0e3b37b256ac1e0738d155858ee9e863306bb9680b013ca0c2b9c9225b6" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:57:19.395335 containerd[1625]: time="2025-11-05T15:57:19.394563969Z" level=info msg="connecting to shim 5c0ef5cdae20c2175e6700c92f5b51655efcebda6dac5cdd81076f676e788576" address="unix:///run/containerd/s/90395200d99626aa00359de2bd8fb453cad32272ee8c7f094a00f93587bf5bbd" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:57:19.403660 containerd[1625]:
time="2025-11-05T15:57:19.403589390Z" level=info msg="connecting to shim 9d748f8cfd15b17635a00caaafefd4b6ad4157032efb93476c5bb115bfdf92f9" address="unix:///run/containerd/s/87e12fc03c795ea659f05bc69cfed4350eae011a040af0d590f5b79687230344" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:57:19.408340 kubelet[2444]: E1105 15:57:19.406616 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:57:19.478465 systemd[1]: Started cri-containerd-9d748f8cfd15b17635a00caaafefd4b6ad4157032efb93476c5bb115bfdf92f9.scope - libcontainer container 9d748f8cfd15b17635a00caaafefd4b6ad4157032efb93476c5bb115bfdf92f9. Nov 5 15:57:19.483696 systemd[1]: Started cri-containerd-5c0ef5cdae20c2175e6700c92f5b51655efcebda6dac5cdd81076f676e788576.scope - libcontainer container 5c0ef5cdae20c2175e6700c92f5b51655efcebda6dac5cdd81076f676e788576. Nov 5 15:57:19.486231 systemd[1]: Started cri-containerd-eea36a21d00918209944243133ffe4111484243e53190891c53a5d85b37d435d.scope - libcontainer container eea36a21d00918209944243133ffe4111484243e53190891c53a5d85b37d435d. 
Nov 5 15:57:19.574094 containerd[1625]: time="2025-11-05T15:57:19.573162889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e64f79d2d5d2a2152f5256e0c27def88,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d748f8cfd15b17635a00caaafefd4b6ad4157032efb93476c5bb115bfdf92f9\"" Nov 5 15:57:19.575069 kubelet[2444]: E1105 15:57:19.575028 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:19.579287 containerd[1625]: time="2025-11-05T15:57:19.579214602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"eea36a21d00918209944243133ffe4111484243e53190891c53a5d85b37d435d\"" Nov 5 15:57:19.580021 kubelet[2444]: E1105 15:57:19.579990 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:19.593379 containerd[1625]: time="2025-11-05T15:57:19.593277873Z" level=info msg="CreateContainer within sandbox \"9d748f8cfd15b17635a00caaafefd4b6ad4157032efb93476c5bb115bfdf92f9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:57:19.595867 containerd[1625]: time="2025-11-05T15:57:19.595826640Z" level=info msg="CreateContainer within sandbox \"eea36a21d00918209944243133ffe4111484243e53190891c53a5d85b37d435d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:57:19.614541 containerd[1625]: time="2025-11-05T15:57:19.614477582Z" level=info msg="Container 504d047a9dd0df7c1de7841416be958f9b624aafbb0a352a4c6d93969c980439: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:57:19.615747 containerd[1625]: time="2025-11-05T15:57:19.615456408Z" level=info msg="Container 
1508c6156b263b62a7093ef16441582897f1487c1568cfca607b8c2152f48ab6: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:57:19.621232 kubelet[2444]: I1105 15:57:19.620418 2444 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:57:19.621232 kubelet[2444]: E1105 15:57:19.621046 2444 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Nov 5 15:57:19.631292 containerd[1625]: time="2025-11-05T15:57:19.631213623Z" level=info msg="CreateContainer within sandbox \"9d748f8cfd15b17635a00caaafefd4b6ad4157032efb93476c5bb115bfdf92f9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"504d047a9dd0df7c1de7841416be958f9b624aafbb0a352a4c6d93969c980439\"" Nov 5 15:57:19.632768 containerd[1625]: time="2025-11-05T15:57:19.632744681Z" level=info msg="StartContainer for \"504d047a9dd0df7c1de7841416be958f9b624aafbb0a352a4c6d93969c980439\"" Nov 5 15:57:19.632905 containerd[1625]: time="2025-11-05T15:57:19.632874486Z" level=info msg="CreateContainer within sandbox \"eea36a21d00918209944243133ffe4111484243e53190891c53a5d85b37d435d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1508c6156b263b62a7093ef16441582897f1487c1568cfca607b8c2152f48ab6\"" Nov 5 15:57:19.633377 containerd[1625]: time="2025-11-05T15:57:19.633297333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c0ef5cdae20c2175e6700c92f5b51655efcebda6dac5cdd81076f676e788576\"" Nov 5 15:57:19.633493 containerd[1625]: time="2025-11-05T15:57:19.633328032Z" level=info msg="StartContainer for \"1508c6156b263b62a7093ef16441582897f1487c1568cfca607b8c2152f48ab6\"" Nov 5 15:57:19.634081 containerd[1625]: time="2025-11-05T15:57:19.634056414Z" level=info msg="connecting to shim 
504d047a9dd0df7c1de7841416be958f9b624aafbb0a352a4c6d93969c980439" address="unix:///run/containerd/s/87e12fc03c795ea659f05bc69cfed4350eae011a040af0d590f5b79687230344" protocol=ttrpc version=3 Nov 5 15:57:19.634578 containerd[1625]: time="2025-11-05T15:57:19.634544294Z" level=info msg="connecting to shim 1508c6156b263b62a7093ef16441582897f1487c1568cfca607b8c2152f48ab6" address="unix:///run/containerd/s/a0eee0e3b37b256ac1e0738d155858ee9e863306bb9680b013ca0c2b9c9225b6" protocol=ttrpc version=3 Nov 5 15:57:19.635681 kubelet[2444]: E1105 15:57:19.635633 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:19.641566 containerd[1625]: time="2025-11-05T15:57:19.641505452Z" level=info msg="CreateContainer within sandbox \"5c0ef5cdae20c2175e6700c92f5b51655efcebda6dac5cdd81076f676e788576\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:57:19.654603 containerd[1625]: time="2025-11-05T15:57:19.654548629Z" level=info msg="Container 8f8aba3f0afe493469bad881ab73adc214fb7fe3e477ce0cb424b738f6702697: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:57:19.662481 systemd[1]: Started cri-containerd-1508c6156b263b62a7093ef16441582897f1487c1568cfca607b8c2152f48ab6.scope - libcontainer container 1508c6156b263b62a7093ef16441582897f1487c1568cfca607b8c2152f48ab6. 
Nov 5 15:57:19.664728 containerd[1625]: time="2025-11-05T15:57:19.664673063Z" level=info msg="CreateContainer within sandbox \"5c0ef5cdae20c2175e6700c92f5b51655efcebda6dac5cdd81076f676e788576\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f8aba3f0afe493469bad881ab73adc214fb7fe3e477ce0cb424b738f6702697\"" Nov 5 15:57:19.665389 containerd[1625]: time="2025-11-05T15:57:19.665360619Z" level=info msg="StartContainer for \"8f8aba3f0afe493469bad881ab73adc214fb7fe3e477ce0cb424b738f6702697\"" Nov 5 15:57:19.667907 containerd[1625]: time="2025-11-05T15:57:19.667827281Z" level=info msg="connecting to shim 8f8aba3f0afe493469bad881ab73adc214fb7fe3e477ce0cb424b738f6702697" address="unix:///run/containerd/s/90395200d99626aa00359de2bd8fb453cad32272ee8c7f094a00f93587bf5bbd" protocol=ttrpc version=3 Nov 5 15:57:19.673784 systemd[1]: Started cri-containerd-504d047a9dd0df7c1de7841416be958f9b624aafbb0a352a4c6d93969c980439.scope - libcontainer container 504d047a9dd0df7c1de7841416be958f9b624aafbb0a352a4c6d93969c980439. Nov 5 15:57:19.693727 systemd[1]: Started cri-containerd-8f8aba3f0afe493469bad881ab73adc214fb7fe3e477ce0cb424b738f6702697.scope - libcontainer container 8f8aba3f0afe493469bad881ab73adc214fb7fe3e477ce0cb424b738f6702697. 
Nov 5 15:57:19.806447 containerd[1625]: time="2025-11-05T15:57:19.806389999Z" level=info msg="StartContainer for \"1508c6156b263b62a7093ef16441582897f1487c1568cfca607b8c2152f48ab6\" returns successfully" Nov 5 15:57:19.806951 containerd[1625]: time="2025-11-05T15:57:19.806505507Z" level=info msg="StartContainer for \"504d047a9dd0df7c1de7841416be958f9b624aafbb0a352a4c6d93969c980439\" returns successfully" Nov 5 15:57:19.821475 kubelet[2444]: E1105 15:57:19.821412 2444 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:57:19.840260 containerd[1625]: time="2025-11-05T15:57:19.840088209Z" level=info msg="StartContainer for \"8f8aba3f0afe493469bad881ab73adc214fb7fe3e477ce0cb424b738f6702697\" returns successfully" Nov 5 15:57:19.958545 kubelet[2444]: E1105 15:57:19.958433 2444 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:57:19.958849 kubelet[2444]: E1105 15:57:19.958703 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:19.964191 kubelet[2444]: E1105 15:57:19.964170 2444 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:57:19.964737 kubelet[2444]: E1105 15:57:19.964685 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:19.966151 kubelet[2444]: E1105 15:57:19.966005 2444 
kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:57:19.966151 kubelet[2444]: E1105 15:57:19.966103 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:20.971888 kubelet[2444]: E1105 15:57:20.971827 2444 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:57:20.972590 kubelet[2444]: E1105 15:57:20.971998 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:20.972590 kubelet[2444]: E1105 15:57:20.972427 2444 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:57:20.972590 kubelet[2444]: E1105 15:57:20.972566 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:21.224939 kubelet[2444]: I1105 15:57:21.224569 2444 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:57:22.261936 kubelet[2444]: E1105 15:57:22.261383 2444 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:57:22.261936 kubelet[2444]: E1105 15:57:22.261610 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:23.056999 kubelet[2444]: E1105 15:57:23.056941 2444 nodelease.go:49] "Failed to get node when trying to set owner 
ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 15:57:23.143541 kubelet[2444]: I1105 15:57:23.143435 2444 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 15:57:23.169895 kubelet[2444]: I1105 15:57:23.169690 2444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:57:23.182482 kubelet[2444]: E1105 15:57:23.182422 2444 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 15:57:23.182482 kubelet[2444]: I1105 15:57:23.182463 2444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:57:23.185362 kubelet[2444]: E1105 15:57:23.185328 2444 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 15:57:23.185362 kubelet[2444]: I1105 15:57:23.185354 2444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:23.188688 kubelet[2444]: E1105 15:57:23.188645 2444 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:23.594553 kubelet[2444]: I1105 15:57:23.594502 2444 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:57:23.597058 kubelet[2444]: E1105 15:57:23.597029 2444 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 15:57:23.597250 kubelet[2444]: 
E1105 15:57:23.597228 2444 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:23.838146 kubelet[2444]: I1105 15:57:23.838089 2444 apiserver.go:52] "Watching apiserver" Nov 5 15:57:23.869440 kubelet[2444]: I1105 15:57:23.869276 2444 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 15:57:25.495958 systemd[1]: Reload requested from client PID 2734 ('systemctl') (unit session-7.scope)... Nov 5 15:57:25.495984 systemd[1]: Reloading... Nov 5 15:57:25.593491 zram_generator::config[2781]: No configuration found. Nov 5 15:57:25.863033 systemd[1]: Reloading finished in 366 ms. Nov 5 15:57:25.897078 kubelet[2444]: I1105 15:57:25.896971 2444 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:57:25.897076 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:57:25.932476 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:57:25.933035 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:57:25.933130 systemd[1]: kubelet.service: Consumed 1.460s CPU time, 123.8M memory peak. Nov 5 15:57:25.936179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:57:26.211238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:57:26.223401 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:57:26.286781 kubelet[2823]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 5 15:57:26.286781 kubelet[2823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:57:26.287495 kubelet[2823]: I1105 15:57:26.286800 2823 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:57:26.300519 kubelet[2823]: I1105 15:57:26.300442 2823 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 15:57:26.300519 kubelet[2823]: I1105 15:57:26.300492 2823 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:57:26.300519 kubelet[2823]: I1105 15:57:26.300537 2823 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 15:57:26.300789 kubelet[2823]: I1105 15:57:26.300547 2823 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:57:26.300890 kubelet[2823]: I1105 15:57:26.300862 2823 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:57:26.302607 kubelet[2823]: I1105 15:57:26.302559 2823 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 15:57:26.305153 kubelet[2823]: I1105 15:57:26.305029 2823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:57:26.310978 kubelet[2823]: I1105 15:57:26.310932 2823 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:57:26.317175 kubelet[2823]: I1105 15:57:26.317144 2823 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 5 15:57:26.318045 kubelet[2823]: I1105 15:57:26.317487 2823 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:57:26.318045 kubelet[2823]: I1105 15:57:26.317539 2823 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:57:26.318045 kubelet[2823]: I1105 15:57:26.317692 2823 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:57:26.318045 
kubelet[2823]: I1105 15:57:26.317700 2823 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 15:57:26.318279 kubelet[2823]: I1105 15:57:26.317724 2823 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 15:57:26.319045 kubelet[2823]: I1105 15:57:26.318989 2823 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:57:26.319261 kubelet[2823]: I1105 15:57:26.319225 2823 kubelet.go:475] "Attempting to sync node with API server" Nov 5 15:57:26.319261 kubelet[2823]: I1105 15:57:26.319244 2823 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:57:26.319366 kubelet[2823]: I1105 15:57:26.319273 2823 kubelet.go:387] "Adding apiserver pod source" Nov 5 15:57:26.319366 kubelet[2823]: I1105 15:57:26.319325 2823 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:57:26.325167 kubelet[2823]: I1105 15:57:26.325053 2823 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:57:26.325791 kubelet[2823]: I1105 15:57:26.325758 2823 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:57:26.325842 kubelet[2823]: I1105 15:57:26.325805 2823 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 15:57:26.332137 kubelet[2823]: I1105 15:57:26.332019 2823 server.go:1262] "Started kubelet" Nov 5 15:57:26.332776 kubelet[2823]: I1105 15:57:26.332734 2823 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:57:26.334737 kubelet[2823]: I1105 15:57:26.334677 2823 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 15:57:26.335626 kubelet[2823]: I1105 15:57:26.335607 2823 server.go:249] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:57:26.335827 kubelet[2823]: I1105 15:57:26.333237 2823 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:57:26.335972 kubelet[2823]: I1105 15:57:26.332735 2823 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:57:26.337402 kubelet[2823]: I1105 15:57:26.337383 2823 server.go:310] "Adding debug handlers to kubelet server" Nov 5 15:57:26.337544 kubelet[2823]: I1105 15:57:26.333063 2823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:57:26.338680 kubelet[2823]: I1105 15:57:26.338068 2823 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 15:57:26.339498 kubelet[2823]: I1105 15:57:26.339474 2823 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:57:26.339666 kubelet[2823]: I1105 15:57:26.339638 2823 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 15:57:26.339768 kubelet[2823]: I1105 15:57:26.339743 2823 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:57:26.339884 kubelet[2823]: E1105 15:57:26.339522 2823 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:57:26.339934 kubelet[2823]: I1105 15:57:26.339905 2823 reconciler.go:29] "Reconciler: start to sync state" Nov 5 15:57:26.348187 kubelet[2823]: I1105 15:57:26.346262 2823 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:57:26.434603 kubelet[2823]: I1105 15:57:26.434538 2823 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 5 15:57:26.439399 kubelet[2823]: I1105 15:57:26.438948 2823 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 5 15:57:26.439399 kubelet[2823]: I1105 15:57:26.438985 2823 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 15:57:26.439399 kubelet[2823]: I1105 15:57:26.439033 2823 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 15:57:26.439399 kubelet[2823]: E1105 15:57:26.439104 2823 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:57:26.458639 kubelet[2823]: I1105 15:57:26.458594 2823 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:57:26.458639 kubelet[2823]: I1105 15:57:26.458612 2823 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:57:26.458639 kubelet[2823]: I1105 15:57:26.458629 2823 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:57:26.458906 kubelet[2823]: I1105 15:57:26.458756 2823 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:57:26.458906 kubelet[2823]: I1105 15:57:26.458767 2823 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:57:26.458906 kubelet[2823]: I1105 15:57:26.458784 2823 policy_none.go:49] "None policy: Start" Nov 5 15:57:26.458906 kubelet[2823]: I1105 15:57:26.458794 2823 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 15:57:26.458906 kubelet[2823]: I1105 15:57:26.458804 2823 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 15:57:26.458906 kubelet[2823]: I1105 15:57:26.458881 2823 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 5 15:57:26.458906 kubelet[2823]: I1105 15:57:26.458889 2823 policy_none.go:47] "Start" Nov 5 15:57:26.465219 kubelet[2823]: E1105 15:57:26.464776 2823 manager.go:513] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:57:26.465797 kubelet[2823]: I1105 15:57:26.465653 2823 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:57:26.465982 kubelet[2823]: I1105 15:57:26.465944 2823 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:57:26.466997 kubelet[2823]: I1105 15:57:26.466366 2823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:57:26.468813 kubelet[2823]: E1105 15:57:26.468775 2823 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:57:26.541103 kubelet[2823]: I1105 15:57:26.541047 2823 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:26.541269 kubelet[2823]: I1105 15:57:26.541066 2823 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:57:26.541629 kubelet[2823]: I1105 15:57:26.541351 2823 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:57:26.577172 kubelet[2823]: I1105 15:57:26.577120 2823 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:57:26.631221 kubelet[2823]: I1105 15:57:26.631175 2823 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 5 15:57:26.631439 kubelet[2823]: I1105 15:57:26.631297 2823 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 15:57:26.642790 kubelet[2823]: I1105 15:57:26.641552 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e64f79d2d5d2a2152f5256e0c27def88-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e64f79d2d5d2a2152f5256e0c27def88\") " 
pod="kube-system/kube-apiserver-localhost" Nov 5 15:57:26.642790 kubelet[2823]: I1105 15:57:26.642792 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:26.642983 kubelet[2823]: I1105 15:57:26.642812 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:26.642983 kubelet[2823]: I1105 15:57:26.642834 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:26.642983 kubelet[2823]: I1105 15:57:26.642857 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e64f79d2d5d2a2152f5256e0c27def88-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e64f79d2d5d2a2152f5256e0c27def88\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:57:26.642983 kubelet[2823]: I1105 15:57:26.642874 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:26.642983 kubelet[2823]: I1105 15:57:26.642886 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:26.643163 kubelet[2823]: I1105 15:57:26.642901 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 5 15:57:26.643163 kubelet[2823]: I1105 15:57:26.642916 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e64f79d2d5d2a2152f5256e0c27def88-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e64f79d2d5d2a2152f5256e0c27def88\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:57:26.862036 kubelet[2823]: E1105 15:57:26.861865 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:26.871500 kubelet[2823]: E1105 15:57:26.871419 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:26.871687 kubelet[2823]: E1105 15:57:26.871522 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:27.327222 kubelet[2823]: 
I1105 15:57:27.325209 2823 apiserver.go:52] "Watching apiserver" Nov 5 15:57:27.340762 kubelet[2823]: I1105 15:57:27.340265 2823 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 15:57:27.430287 kubelet[2823]: I1105 15:57:27.430205 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.430162236 podStartE2EDuration="1.430162236s" podCreationTimestamp="2025-11-05 15:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:57:27.425833872 +0000 UTC m=+1.195988141" watchObservedRunningTime="2025-11-05 15:57:27.430162236 +0000 UTC m=+1.200316515" Nov 5 15:57:27.460042 kubelet[2823]: I1105 15:57:27.459640 2823 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:57:27.460042 kubelet[2823]: I1105 15:57:27.459672 2823 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:27.460042 kubelet[2823]: I1105 15:57:27.459929 2823 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:57:27.492713 kubelet[2823]: E1105 15:57:27.492641 2823 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 5 15:57:27.492912 kubelet[2823]: E1105 15:57:27.492879 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:27.493007 kubelet[2823]: E1105 15:57:27.492990 2823 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 15:57:27.493163 kubelet[2823]: E1105 
15:57:27.493080 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:27.508340 kubelet[2823]: I1105 15:57:27.508123 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.508100475 podStartE2EDuration="1.508100475s" podCreationTimestamp="2025-11-05 15:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:57:27.451529591 +0000 UTC m=+1.221683860" watchObservedRunningTime="2025-11-05 15:57:27.508100475 +0000 UTC m=+1.278254744" Nov 5 15:57:27.511711 kubelet[2823]: E1105 15:57:27.509830 2823 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:57:27.511711 kubelet[2823]: E1105 15:57:27.510231 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:27.561915 kubelet[2823]: I1105 15:57:27.561829 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.561803243 podStartE2EDuration="1.561803243s" podCreationTimestamp="2025-11-05 15:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:57:27.509541056 +0000 UTC m=+1.279695355" watchObservedRunningTime="2025-11-05 15:57:27.561803243 +0000 UTC m=+1.331957512" Nov 5 15:57:28.460647 kubelet[2823]: E1105 15:57:28.460595 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 5 15:57:28.460647 kubelet[2823]: E1105 15:57:28.460602 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:28.460647 kubelet[2823]: E1105 15:57:28.460651 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:30.281009 kubelet[2823]: E1105 15:57:30.280068 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:30.464401 kubelet[2823]: E1105 15:57:30.463892 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:32.227932 kubelet[2823]: I1105 15:57:32.226254 2823 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:57:32.227932 kubelet[2823]: I1105 15:57:32.226832 2823 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:57:32.228551 containerd[1625]: time="2025-11-05T15:57:32.226666146Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 15:57:32.933995 systemd[1]: Created slice kubepods-besteffort-podc977edb0_9f4f_4579_80ca_df52d70449d3.slice - libcontainer container kubepods-besteffort-podc977edb0_9f4f_4579_80ca_df52d70449d3.slice. 
Nov 5 15:57:32.960982 kubelet[2823]: I1105 15:57:32.960216 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c977edb0-9f4f-4579-80ca-df52d70449d3-kube-proxy\") pod \"kube-proxy-z6whd\" (UID: \"c977edb0-9f4f-4579-80ca-df52d70449d3\") " pod="kube-system/kube-proxy-z6whd" Nov 5 15:57:32.960982 kubelet[2823]: I1105 15:57:32.960273 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c977edb0-9f4f-4579-80ca-df52d70449d3-xtables-lock\") pod \"kube-proxy-z6whd\" (UID: \"c977edb0-9f4f-4579-80ca-df52d70449d3\") " pod="kube-system/kube-proxy-z6whd" Nov 5 15:57:32.960982 kubelet[2823]: I1105 15:57:32.960353 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c977edb0-9f4f-4579-80ca-df52d70449d3-lib-modules\") pod \"kube-proxy-z6whd\" (UID: \"c977edb0-9f4f-4579-80ca-df52d70449d3\") " pod="kube-system/kube-proxy-z6whd" Nov 5 15:57:32.960982 kubelet[2823]: I1105 15:57:32.960374 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psrn7\" (UniqueName: \"kubernetes.io/projected/c977edb0-9f4f-4579-80ca-df52d70449d3-kube-api-access-psrn7\") pod \"kube-proxy-z6whd\" (UID: \"c977edb0-9f4f-4579-80ca-df52d70449d3\") " pod="kube-system/kube-proxy-z6whd" Nov 5 15:57:33.321414 kubelet[2823]: E1105 15:57:33.321249 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:33.341330 containerd[1625]: time="2025-11-05T15:57:33.339340164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z6whd,Uid:c977edb0-9f4f-4579-80ca-df52d70449d3,Namespace:kube-system,Attempt:0,}" Nov 5 
15:57:33.396793 systemd[1]: Created slice kubepods-besteffort-pod6389017c_cb97_4b82_8632_477f02eea241.slice - libcontainer container kubepods-besteffort-pod6389017c_cb97_4b82_8632_477f02eea241.slice. Nov 5 15:57:33.413031 containerd[1625]: time="2025-11-05T15:57:33.412963056Z" level=info msg="connecting to shim 0f99ec193c894c8168c05bbd9538935450e0587507327e0a8ffe8ca4558a9feb" address="unix:///run/containerd/s/cd851ea8b000f734ad59cfd454c024b24b7dc5b20d5fdb44dba6ef5e9540feda" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:57:33.478442 systemd[1]: Started cri-containerd-0f99ec193c894c8168c05bbd9538935450e0587507327e0a8ffe8ca4558a9feb.scope - libcontainer container 0f99ec193c894c8168c05bbd9538935450e0587507327e0a8ffe8ca4558a9feb. Nov 5 15:57:33.482697 kubelet[2823]: I1105 15:57:33.482262 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5v4\" (UniqueName: \"kubernetes.io/projected/6389017c-cb97-4b82-8632-477f02eea241-kube-api-access-bv5v4\") pod \"tigera-operator-65cdcdfd6d-wmg8c\" (UID: \"6389017c-cb97-4b82-8632-477f02eea241\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wmg8c" Nov 5 15:57:33.482697 kubelet[2823]: I1105 15:57:33.482356 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6389017c-cb97-4b82-8632-477f02eea241-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-wmg8c\" (UID: \"6389017c-cb97-4b82-8632-477f02eea241\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wmg8c" Nov 5 15:57:33.656011 containerd[1625]: time="2025-11-05T15:57:33.653796818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z6whd,Uid:c977edb0-9f4f-4579-80ca-df52d70449d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f99ec193c894c8168c05bbd9538935450e0587507327e0a8ffe8ca4558a9feb\"" Nov 5 15:57:33.656901 kubelet[2823]: E1105 15:57:33.656830 2823 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:33.725323 containerd[1625]: time="2025-11-05T15:57:33.722005994Z" level=info msg="CreateContainer within sandbox \"0f99ec193c894c8168c05bbd9538935450e0587507327e0a8ffe8ca4558a9feb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:57:33.733330 containerd[1625]: time="2025-11-05T15:57:33.732900893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wmg8c,Uid:6389017c-cb97-4b82-8632-477f02eea241,Namespace:tigera-operator,Attempt:0,}" Nov 5 15:57:33.767769 containerd[1625]: time="2025-11-05T15:57:33.765898595Z" level=info msg="Container f09a025da523d4f03906ab5ef317786aa0d0114b8f3c450da3f0562f4f8adba5: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:57:33.796241 containerd[1625]: time="2025-11-05T15:57:33.796156548Z" level=info msg="CreateContainer within sandbox \"0f99ec193c894c8168c05bbd9538935450e0587507327e0a8ffe8ca4558a9feb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f09a025da523d4f03906ab5ef317786aa0d0114b8f3c450da3f0562f4f8adba5\"" Nov 5 15:57:33.797228 containerd[1625]: time="2025-11-05T15:57:33.797190622Z" level=info msg="StartContainer for \"f09a025da523d4f03906ab5ef317786aa0d0114b8f3c450da3f0562f4f8adba5\"" Nov 5 15:57:33.801475 containerd[1625]: time="2025-11-05T15:57:33.801425402Z" level=info msg="connecting to shim f09a025da523d4f03906ab5ef317786aa0d0114b8f3c450da3f0562f4f8adba5" address="unix:///run/containerd/s/cd851ea8b000f734ad59cfd454c024b24b7dc5b20d5fdb44dba6ef5e9540feda" protocol=ttrpc version=3 Nov 5 15:57:33.838699 systemd[1]: Started cri-containerd-f09a025da523d4f03906ab5ef317786aa0d0114b8f3c450da3f0562f4f8adba5.scope - libcontainer container f09a025da523d4f03906ab5ef317786aa0d0114b8f3c450da3f0562f4f8adba5. 
Nov 5 15:57:33.901882 containerd[1625]: time="2025-11-05T15:57:33.900171425Z" level=info msg="connecting to shim f1b9a67d12c304787782b97fbdffe16270d2e086ac22aea6cf80ee5f6d00c168" address="unix:///run/containerd/s/51505f474d98e85fc02f1e1ac30e789e90e942a7856577a962df2d6f4b3bdfa4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:57:34.010531 systemd[1]: Started cri-containerd-f1b9a67d12c304787782b97fbdffe16270d2e086ac22aea6cf80ee5f6d00c168.scope - libcontainer container f1b9a67d12c304787782b97fbdffe16270d2e086ac22aea6cf80ee5f6d00c168. Nov 5 15:57:34.017606 containerd[1625]: time="2025-11-05T15:57:34.017549355Z" level=info msg="StartContainer for \"f09a025da523d4f03906ab5ef317786aa0d0114b8f3c450da3f0562f4f8adba5\" returns successfully" Nov 5 15:57:34.167880 containerd[1625]: time="2025-11-05T15:57:34.167813042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wmg8c,Uid:6389017c-cb97-4b82-8632-477f02eea241,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f1b9a67d12c304787782b97fbdffe16270d2e086ac22aea6cf80ee5f6d00c168\"" Nov 5 15:57:34.170422 containerd[1625]: time="2025-11-05T15:57:34.170378733Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 15:57:34.497953 kubelet[2823]: E1105 15:57:34.497878 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:35.655027 kubelet[2823]: E1105 15:57:35.654951 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:35.694054 kubelet[2823]: I1105 15:57:35.693982 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z6whd" podStartSLOduration=3.693964108 podStartE2EDuration="3.693964108s" podCreationTimestamp="2025-11-05 15:57:32 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:57:34.54669446 +0000 UTC m=+8.316848749" watchObservedRunningTime="2025-11-05 15:57:35.693964108 +0000 UTC m=+9.464118377" Nov 5 15:57:36.501437 kubelet[2823]: E1105 15:57:36.501393 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:37.145259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367304299.mount: Deactivated successfully. Nov 5 15:57:37.378267 kubelet[2823]: E1105 15:57:37.378207 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:37.504331 kubelet[2823]: E1105 15:57:37.503678 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:37.780127 containerd[1625]: time="2025-11-05T15:57:37.779979444Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:57:37.780867 containerd[1625]: time="2025-11-05T15:57:37.780828068Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 15:57:37.781892 containerd[1625]: time="2025-11-05T15:57:37.781866920Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:57:37.783905 containerd[1625]: time="2025-11-05T15:57:37.783882646Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:57:37.784473 containerd[1625]: time="2025-11-05T15:57:37.784450012Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.614027927s" Nov 5 15:57:37.784515 containerd[1625]: time="2025-11-05T15:57:37.784477524Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 15:57:37.790999 containerd[1625]: time="2025-11-05T15:57:37.790947159Z" level=info msg="CreateContainer within sandbox \"f1b9a67d12c304787782b97fbdffe16270d2e086ac22aea6cf80ee5f6d00c168\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 15:57:37.800557 containerd[1625]: time="2025-11-05T15:57:37.800494765Z" level=info msg="Container ba8c453a471000f76eee0fbcd47c2f0dcffa42f59142ffc30415551b0b394011: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:57:37.808179 containerd[1625]: time="2025-11-05T15:57:37.808114670Z" level=info msg="CreateContainer within sandbox \"f1b9a67d12c304787782b97fbdffe16270d2e086ac22aea6cf80ee5f6d00c168\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ba8c453a471000f76eee0fbcd47c2f0dcffa42f59142ffc30415551b0b394011\"" Nov 5 15:57:37.808816 containerd[1625]: time="2025-11-05T15:57:37.808783406Z" level=info msg="StartContainer for \"ba8c453a471000f76eee0fbcd47c2f0dcffa42f59142ffc30415551b0b394011\"" Nov 5 15:57:37.810016 containerd[1625]: time="2025-11-05T15:57:37.809983010Z" level=info msg="connecting to shim ba8c453a471000f76eee0fbcd47c2f0dcffa42f59142ffc30415551b0b394011" 
address="unix:///run/containerd/s/51505f474d98e85fc02f1e1ac30e789e90e942a7856577a962df2d6f4b3bdfa4" protocol=ttrpc version=3 Nov 5 15:57:37.839599 systemd[1]: Started cri-containerd-ba8c453a471000f76eee0fbcd47c2f0dcffa42f59142ffc30415551b0b394011.scope - libcontainer container ba8c453a471000f76eee0fbcd47c2f0dcffa42f59142ffc30415551b0b394011. Nov 5 15:57:37.878200 containerd[1625]: time="2025-11-05T15:57:37.878135243Z" level=info msg="StartContainer for \"ba8c453a471000f76eee0fbcd47c2f0dcffa42f59142ffc30415551b0b394011\" returns successfully" Nov 5 15:57:38.516420 kubelet[2823]: I1105 15:57:38.516345 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-wmg8c" podStartSLOduration=1.900954212 podStartE2EDuration="5.516328491s" podCreationTimestamp="2025-11-05 15:57:33 +0000 UTC" firstStartedPulling="2025-11-05 15:57:34.169798392 +0000 UTC m=+7.939952661" lastFinishedPulling="2025-11-05 15:57:37.785172671 +0000 UTC m=+11.555326940" observedRunningTime="2025-11-05 15:57:38.516171417 +0000 UTC m=+12.286325686" watchObservedRunningTime="2025-11-05 15:57:38.516328491 +0000 UTC m=+12.286482760" Nov 5 15:57:45.425993 sudo[1823]: pam_unix(sudo:session): session closed for user root Nov 5 15:57:45.428551 sshd[1822]: Connection closed by 10.0.0.1 port 49122 Nov 5 15:57:45.429746 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Nov 5 15:57:45.442994 systemd[1]: sshd@6-10.0.0.107:22-10.0.0.1:49122.service: Deactivated successfully. Nov 5 15:57:45.451978 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 15:57:45.452298 systemd[1]: session-7.scope: Consumed 8.096s CPU time, 225.1M memory peak. Nov 5 15:57:45.470416 systemd-logind[1592]: Session 7 logged out. Waiting for processes to exit. Nov 5 15:57:45.482625 systemd-logind[1592]: Removed session 7. 
Nov 5 15:57:52.415978 systemd[1]: Created slice kubepods-besteffort-pod27c11d93_2a20_4369_a327_d2e982d6cff0.slice - libcontainer container kubepods-besteffort-pod27c11d93_2a20_4369_a327_d2e982d6cff0.slice. Nov 5 15:57:52.483567 kubelet[2823]: I1105 15:57:52.483502 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/27c11d93-2a20-4369-a327-d2e982d6cff0-typha-certs\") pod \"calico-typha-79948b7968-5fkwd\" (UID: \"27c11d93-2a20-4369-a327-d2e982d6cff0\") " pod="calico-system/calico-typha-79948b7968-5fkwd" Nov 5 15:57:52.484891 kubelet[2823]: I1105 15:57:52.484822 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vkz6\" (UniqueName: \"kubernetes.io/projected/27c11d93-2a20-4369-a327-d2e982d6cff0-kube-api-access-4vkz6\") pod \"calico-typha-79948b7968-5fkwd\" (UID: \"27c11d93-2a20-4369-a327-d2e982d6cff0\") " pod="calico-system/calico-typha-79948b7968-5fkwd" Nov 5 15:57:52.484891 kubelet[2823]: I1105 15:57:52.484894 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c11d93-2a20-4369-a327-d2e982d6cff0-tigera-ca-bundle\") pod \"calico-typha-79948b7968-5fkwd\" (UID: \"27c11d93-2a20-4369-a327-d2e982d6cff0\") " pod="calico-system/calico-typha-79948b7968-5fkwd" Nov 5 15:57:52.681961 systemd[1]: Created slice kubepods-besteffort-pod8bfff047_6e70_4c6b_9c60_7f641b49a3fc.slice - libcontainer container kubepods-besteffort-pod8bfff047_6e70_4c6b_9c60_7f641b49a3fc.slice. 
Nov 5 15:57:52.748549 kubelet[2823]: E1105 15:57:52.748492 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:52.749661 containerd[1625]: time="2025-11-05T15:57:52.749605998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79948b7968-5fkwd,Uid:27c11d93-2a20-4369-a327-d2e982d6cff0,Namespace:calico-system,Attempt:0,}" Nov 5 15:57:52.761488 kubelet[2823]: E1105 15:57:52.761398 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:57:52.789585 kubelet[2823]: I1105 15:57:52.786568 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-tigera-ca-bundle\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.789585 kubelet[2823]: I1105 15:57:52.789519 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-xtables-lock\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.789585 kubelet[2823]: I1105 15:57:52.789556 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-var-run-calico\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " 
pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.789585 kubelet[2823]: I1105 15:57:52.789583 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-lib-modules\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.789585 kubelet[2823]: I1105 15:57:52.789612 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-flexvol-driver-host\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.790690 kubelet[2823]: I1105 15:57:52.789639 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-var-lib-calico\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.790690 kubelet[2823]: I1105 15:57:52.789662 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-cni-log-dir\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.790690 kubelet[2823]: I1105 15:57:52.789686 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-node-certs\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.790690 kubelet[2823]: I1105 
15:57:52.789710 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qxmh\" (UniqueName: \"kubernetes.io/projected/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-kube-api-access-5qxmh\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.790690 kubelet[2823]: I1105 15:57:52.789738 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-cni-bin-dir\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.791040 kubelet[2823]: I1105 15:57:52.789762 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-cni-net-dir\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.791040 kubelet[2823]: I1105 15:57:52.789783 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8bfff047-6e70-4c6b-9c60-7f641b49a3fc-policysync\") pod \"calico-node-lvlp5\" (UID: \"8bfff047-6e70-4c6b-9c60-7f641b49a3fc\") " pod="calico-system/calico-node-lvlp5" Nov 5 15:57:52.803608 containerd[1625]: time="2025-11-05T15:57:52.803485469Z" level=info msg="connecting to shim 330d8a20853182ead1af65945ab42c36887da8fdc6a20be4224acca9a306889b" address="unix:///run/containerd/s/bc67ecf9aef5a1219ece93ddcf24cc8298b5020bdbcac9b6490f580377e9951f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:57:52.841034 systemd[1]: Started cri-containerd-330d8a20853182ead1af65945ab42c36887da8fdc6a20be4224acca9a306889b.scope - libcontainer container 
330d8a20853182ead1af65945ab42c36887da8fdc6a20be4224acca9a306889b. Nov 5 15:57:52.890594 kubelet[2823]: I1105 15:57:52.890479 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzxcx\" (UniqueName: \"kubernetes.io/projected/6636406c-76cb-4ddc-8f4d-b82da1f33a92-kube-api-access-qzxcx\") pod \"csi-node-driver-cr9h8\" (UID: \"6636406c-76cb-4ddc-8f4d-b82da1f33a92\") " pod="calico-system/csi-node-driver-cr9h8" Nov 5 15:57:52.891112 kubelet[2823]: I1105 15:57:52.891060 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6636406c-76cb-4ddc-8f4d-b82da1f33a92-kubelet-dir\") pod \"csi-node-driver-cr9h8\" (UID: \"6636406c-76cb-4ddc-8f4d-b82da1f33a92\") " pod="calico-system/csi-node-driver-cr9h8" Nov 5 15:57:52.891112 kubelet[2823]: I1105 15:57:52.891094 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6636406c-76cb-4ddc-8f4d-b82da1f33a92-socket-dir\") pod \"csi-node-driver-cr9h8\" (UID: \"6636406c-76cb-4ddc-8f4d-b82da1f33a92\") " pod="calico-system/csi-node-driver-cr9h8" Nov 5 15:57:52.891177 kubelet[2823]: I1105 15:57:52.891127 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6636406c-76cb-4ddc-8f4d-b82da1f33a92-registration-dir\") pod \"csi-node-driver-cr9h8\" (UID: \"6636406c-76cb-4ddc-8f4d-b82da1f33a92\") " pod="calico-system/csi-node-driver-cr9h8" Nov 5 15:57:52.891177 kubelet[2823]: I1105 15:57:52.891147 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6636406c-76cb-4ddc-8f4d-b82da1f33a92-varrun\") pod \"csi-node-driver-cr9h8\" (UID: \"6636406c-76cb-4ddc-8f4d-b82da1f33a92\") " 
pod="calico-system/csi-node-driver-cr9h8" Nov 5 15:57:52.894772 kubelet[2823]: E1105 15:57:52.894735 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.894772 kubelet[2823]: W1105 15:57:52.894764 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.894916 kubelet[2823]: E1105 15:57:52.894786 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.898723 kubelet[2823]: E1105 15:57:52.898678 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.898723 kubelet[2823]: W1105 15:57:52.898707 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.898723 kubelet[2823]: E1105 15:57:52.898729 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.907823 kubelet[2823]: E1105 15:57:52.907744 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.907823 kubelet[2823]: W1105 15:57:52.907777 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.907823 kubelet[2823]: E1105 15:57:52.907808 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.908408 kubelet[2823]: E1105 15:57:52.908376 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.908408 kubelet[2823]: W1105 15:57:52.908392 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.908408 kubelet[2823]: E1105 15:57:52.908405 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.912278 containerd[1625]: time="2025-11-05T15:57:52.912214679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79948b7968-5fkwd,Uid:27c11d93-2a20-4369-a327-d2e982d6cff0,Namespace:calico-system,Attempt:0,} returns sandbox id \"330d8a20853182ead1af65945ab42c36887da8fdc6a20be4224acca9a306889b\"" Nov 5 15:57:52.913276 kubelet[2823]: E1105 15:57:52.913220 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:52.914440 containerd[1625]: time="2025-11-05T15:57:52.914404920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:57:52.992373 kubelet[2823]: E1105 15:57:52.991811 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.992373 kubelet[2823]: W1105 15:57:52.991838 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.992373 kubelet[2823]: E1105 15:57:52.991860 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.992373 kubelet[2823]: E1105 15:57:52.992130 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.992373 kubelet[2823]: W1105 15:57:52.992144 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.992373 kubelet[2823]: E1105 15:57:52.992155 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.993178 kubelet[2823]: E1105 15:57:52.992659 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.993178 kubelet[2823]: W1105 15:57:52.992841 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.993178 kubelet[2823]: E1105 15:57:52.992877 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.993613 kubelet[2823]: E1105 15:57:52.993588 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.993613 kubelet[2823]: W1105 15:57:52.993609 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.993713 kubelet[2823]: E1105 15:57:52.993623 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.993906 kubelet[2823]: E1105 15:57:52.993888 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.993906 kubelet[2823]: W1105 15:57:52.993904 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.994083 kubelet[2823]: E1105 15:57:52.993916 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.994170 kubelet[2823]: E1105 15:57:52.994150 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.994170 kubelet[2823]: W1105 15:57:52.994162 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.994241 kubelet[2823]: E1105 15:57:52.994173 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.994434 kubelet[2823]: E1105 15:57:52.994416 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.994434 kubelet[2823]: W1105 15:57:52.994429 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.994546 kubelet[2823]: E1105 15:57:52.994443 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.994731 kubelet[2823]: E1105 15:57:52.994713 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.994731 kubelet[2823]: W1105 15:57:52.994725 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.994817 kubelet[2823]: E1105 15:57:52.994736 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.994957 kubelet[2823]: E1105 15:57:52.994939 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.994957 kubelet[2823]: W1105 15:57:52.994951 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.995056 kubelet[2823]: E1105 15:57:52.994961 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.995200 kubelet[2823]: E1105 15:57:52.995182 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.995200 kubelet[2823]: W1105 15:57:52.995194 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.995281 kubelet[2823]: E1105 15:57:52.995205 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.995428 kubelet[2823]: E1105 15:57:52.995410 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.995428 kubelet[2823]: W1105 15:57:52.995422 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.995515 kubelet[2823]: E1105 15:57:52.995432 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.995786 kubelet[2823]: E1105 15:57:52.995611 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:52.995786 kubelet[2823]: E1105 15:57:52.995660 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.995786 kubelet[2823]: W1105 15:57:52.995671 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.995786 kubelet[2823]: E1105 15:57:52.995683 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.996067 kubelet[2823]: E1105 15:57:52.996052 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.996259 containerd[1625]: time="2025-11-05T15:57:52.996126309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvlp5,Uid:8bfff047-6e70-4c6b-9c60-7f641b49a3fc,Namespace:calico-system,Attempt:0,}" Nov 5 15:57:52.996401 kubelet[2823]: W1105 15:57:52.996138 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.996401 kubelet[2823]: E1105 15:57:52.996157 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.996582 kubelet[2823]: E1105 15:57:52.996550 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.996582 kubelet[2823]: W1105 15:57:52.996565 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.996582 kubelet[2823]: E1105 15:57:52.996577 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.996836 kubelet[2823]: E1105 15:57:52.996807 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.996836 kubelet[2823]: W1105 15:57:52.996828 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.996896 kubelet[2823]: E1105 15:57:52.996839 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.997682 kubelet[2823]: E1105 15:57:52.997638 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.997682 kubelet[2823]: W1105 15:57:52.997674 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.997682 kubelet[2823]: E1105 15:57:52.997688 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.997937 kubelet[2823]: E1105 15:57:52.997912 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.997937 kubelet[2823]: W1105 15:57:52.997929 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.997937 kubelet[2823]: E1105 15:57:52.997939 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.998239 kubelet[2823]: E1105 15:57:52.998217 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.998293 kubelet[2823]: W1105 15:57:52.998243 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.998293 kubelet[2823]: E1105 15:57:52.998255 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.999126 kubelet[2823]: E1105 15:57:52.998550 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.999126 kubelet[2823]: W1105 15:57:52.998560 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.999126 kubelet[2823]: E1105 15:57:52.998571 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.999126 kubelet[2823]: E1105 15:57:52.998788 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.999126 kubelet[2823]: W1105 15:57:52.998796 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.999126 kubelet[2823]: E1105 15:57:52.998805 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.999126 kubelet[2823]: E1105 15:57:52.999025 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.999126 kubelet[2823]: W1105 15:57:52.999035 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.999126 kubelet[2823]: E1105 15:57:52.999044 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.999472 kubelet[2823]: E1105 15:57:52.999256 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.999472 kubelet[2823]: W1105 15:57:52.999264 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.999472 kubelet[2823]: E1105 15:57:52.999274 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:52.999576 kubelet[2823]: E1105 15:57:52.999517 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.999576 kubelet[2823]: W1105 15:57:52.999526 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.999576 kubelet[2823]: E1105 15:57:52.999534 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:52.999731 kubelet[2823]: E1105 15:57:52.999715 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:52.999731 kubelet[2823]: W1105 15:57:52.999724 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:52.999731 kubelet[2823]: E1105 15:57:52.999732 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:53.000121 kubelet[2823]: E1105 15:57:53.000098 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:53.000121 kubelet[2823]: W1105 15:57:53.000113 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:53.000211 kubelet[2823]: E1105 15:57:53.000125 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:53.010889 kubelet[2823]: E1105 15:57:53.010844 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:53.010889 kubelet[2823]: W1105 15:57:53.010879 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:53.011049 kubelet[2823]: E1105 15:57:53.010911 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:53.024283 containerd[1625]: time="2025-11-05T15:57:53.024216370Z" level=info msg="connecting to shim 84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07" address="unix:///run/containerd/s/3b465a0b86ffead177f5c04fd7685666e718158300a6b5e8822a46cf0dce829b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:57:53.060499 systemd[1]: Started cri-containerd-84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07.scope - libcontainer container 84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07. 
Nov 5 15:57:53.094838 containerd[1625]: time="2025-11-05T15:57:53.094793120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvlp5,Uid:8bfff047-6e70-4c6b-9c60-7f641b49a3fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07\"" Nov 5 15:57:53.095581 kubelet[2823]: E1105 15:57:53.095555 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:54.439554 kubelet[2823]: E1105 15:57:54.439487 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:57:55.118672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286069303.mount: Deactivated successfully. 
Nov 5 15:57:55.891665 containerd[1625]: time="2025-11-05T15:57:55.891594340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:57:55.892800 containerd[1625]: time="2025-11-05T15:57:55.892734649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 15:57:55.894296 containerd[1625]: time="2025-11-05T15:57:55.894242289Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:57:55.896671 containerd[1625]: time="2025-11-05T15:57:55.896615070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:57:55.897197 containerd[1625]: time="2025-11-05T15:57:55.897126450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.982679762s" Nov 5 15:57:55.897197 containerd[1625]: time="2025-11-05T15:57:55.897176384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 15:57:55.898484 containerd[1625]: time="2025-11-05T15:57:55.898414066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:57:55.909585 containerd[1625]: time="2025-11-05T15:57:55.909528572Z" level=info msg="CreateContainer within sandbox \"330d8a20853182ead1af65945ab42c36887da8fdc6a20be4224acca9a306889b\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:57:55.917432 containerd[1625]: time="2025-11-05T15:57:55.917369685Z" level=info msg="Container 1325680e00388b89052d90739aea7703257ad807dfe5b57c015b7164a4baf950: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:57:55.925768 containerd[1625]: time="2025-11-05T15:57:55.925711176Z" level=info msg="CreateContainer within sandbox \"330d8a20853182ead1af65945ab42c36887da8fdc6a20be4224acca9a306889b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1325680e00388b89052d90739aea7703257ad807dfe5b57c015b7164a4baf950\"" Nov 5 15:57:55.926228 containerd[1625]: time="2025-11-05T15:57:55.926204262Z" level=info msg="StartContainer for \"1325680e00388b89052d90739aea7703257ad807dfe5b57c015b7164a4baf950\"" Nov 5 15:57:55.927171 containerd[1625]: time="2025-11-05T15:57:55.927146329Z" level=info msg="connecting to shim 1325680e00388b89052d90739aea7703257ad807dfe5b57c015b7164a4baf950" address="unix:///run/containerd/s/bc67ecf9aef5a1219ece93ddcf24cc8298b5020bdbcac9b6490f580377e9951f" protocol=ttrpc version=3 Nov 5 15:57:55.952589 systemd[1]: Started cri-containerd-1325680e00388b89052d90739aea7703257ad807dfe5b57c015b7164a4baf950.scope - libcontainer container 1325680e00388b89052d90739aea7703257ad807dfe5b57c015b7164a4baf950. 
Nov 5 15:57:56.027841 containerd[1625]: time="2025-11-05T15:57:56.027795755Z" level=info msg="StartContainer for \"1325680e00388b89052d90739aea7703257ad807dfe5b57c015b7164a4baf950\" returns successfully" Nov 5 15:57:56.440893 kubelet[2823]: E1105 15:57:56.440797 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:57:56.571226 kubelet[2823]: E1105 15:57:56.571186 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:56.601496 kubelet[2823]: I1105 15:57:56.601391 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79948b7968-5fkwd" podStartSLOduration=1.6171408729999999 podStartE2EDuration="4.601364873s" podCreationTimestamp="2025-11-05 15:57:52 +0000 UTC" firstStartedPulling="2025-11-05 15:57:52.91397335 +0000 UTC m=+26.684127609" lastFinishedPulling="2025-11-05 15:57:55.89819733 +0000 UTC m=+29.668351609" observedRunningTime="2025-11-05 15:57:56.598481754 +0000 UTC m=+30.368636033" watchObservedRunningTime="2025-11-05 15:57:56.601364873 +0000 UTC m=+30.371519153" Nov 5 15:57:56.659860 kubelet[2823]: E1105 15:57:56.659815 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:56.659860 kubelet[2823]: W1105 15:57:56.659844 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:56.660061 kubelet[2823]: E1105 15:57:56.659866 2823 plugins.go:697] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:56.660359 kubelet[2823]: E1105 15:57:56.660334 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:56.660359 kubelet[2823]: W1105 15:57:56.660349 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:56.660451 kubelet[2823]: E1105 15:57:56.660361 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:56.660587 kubelet[2823]: E1105 15:57:56.660569 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:56.660587 kubelet[2823]: W1105 15:57:56.660582 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:56.660690 kubelet[2823]: E1105 15:57:56.660597 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:56.660868 kubelet[2823]: E1105 15:57:56.660841 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:56.660868 kubelet[2823]: W1105 15:57:56.660855 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:56.660868 kubelet[2823]: E1105 15:57:56.660866 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:56.661230 kubelet[2823]: E1105 15:57:56.661202 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:56.661230 kubelet[2823]: W1105 15:57:56.661217 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:56.661230 kubelet[2823]: E1105 15:57:56.661228 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 5 15:57:56.661491 kubelet[2823]: E1105 15:57:56.661457 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.661491 kubelet[2823]: W1105 15:57:56.661477 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.661491 kubelet[2823]: E1105 15:57:56.661489 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.661691 kubelet[2823]: E1105 15:57:56.661676 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.661691 kubelet[2823]: W1105 15:57:56.661688 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.661772 kubelet[2823]: E1105 15:57:56.661699 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.662003 kubelet[2823]: E1105 15:57:56.661985 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.662003 kubelet[2823]: W1105 15:57:56.661998 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.662079 kubelet[2823]: E1105 15:57:56.662008 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.662599 kubelet[2823]: E1105 15:57:56.662268 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.662599 kubelet[2823]: W1105 15:57:56.662284 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.662599 kubelet[2823]: E1105 15:57:56.662295 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.662599 kubelet[2823]: E1105 15:57:56.662528 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.662599 kubelet[2823]: W1105 15:57:56.662539 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.662599 kubelet[2823]: E1105 15:57:56.662551 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.662848 kubelet[2823]: E1105 15:57:56.662764 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.662848 kubelet[2823]: W1105 15:57:56.662774 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.662848 kubelet[2823]: E1105 15:57:56.662785 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.662981 kubelet[2823]: E1105 15:57:56.662961 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.662981 kubelet[2823]: W1105 15:57:56.662975 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.663068 kubelet[2823]: E1105 15:57:56.662988 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.663210 kubelet[2823]: E1105 15:57:56.663192 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.663210 kubelet[2823]: W1105 15:57:56.663206 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.663339 kubelet[2823]: E1105 15:57:56.663217 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.663459 kubelet[2823]: E1105 15:57:56.663442 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.663459 kubelet[2823]: W1105 15:57:56.663455 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.663543 kubelet[2823]: E1105 15:57:56.663469 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.663671 kubelet[2823]: E1105 15:57:56.663654 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.663671 kubelet[2823]: W1105 15:57:56.663668 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.663753 kubelet[2823]: E1105 15:57:56.663678 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.721343 kubelet[2823]: E1105 15:57:56.721190 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.721343 kubelet[2823]: W1105 15:57:56.721212 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.721343 kubelet[2823]: E1105 15:57:56.721243 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.721741 kubelet[2823]: E1105 15:57:56.721689 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.721741 kubelet[2823]: W1105 15:57:56.721704 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.721741 kubelet[2823]: E1105 15:57:56.721717 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.721968 kubelet[2823]: E1105 15:57:56.721951 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.721968 kubelet[2823]: W1105 15:57:56.721964 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.722115 kubelet[2823]: E1105 15:57:56.721974 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.722224 kubelet[2823]: E1105 15:57:56.722217 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.722288 kubelet[2823]: W1105 15:57:56.722228 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.722288 kubelet[2823]: E1105 15:57:56.722255 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.722521 kubelet[2823]: E1105 15:57:56.722472 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.722521 kubelet[2823]: W1105 15:57:56.722483 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.722521 kubelet[2823]: E1105 15:57:56.722494 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.722723 kubelet[2823]: E1105 15:57:56.722705 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.722723 kubelet[2823]: W1105 15:57:56.722717 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.722819 kubelet[2823]: E1105 15:57:56.722728 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.722935 kubelet[2823]: E1105 15:57:56.722918 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.722935 kubelet[2823]: W1105 15:57:56.722929 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.723007 kubelet[2823]: E1105 15:57:56.722939 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.723146 kubelet[2823]: E1105 15:57:56.723127 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.723146 kubelet[2823]: W1105 15:57:56.723138 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.723225 kubelet[2823]: E1105 15:57:56.723147 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.723374 kubelet[2823]: E1105 15:57:56.723353 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.723374 kubelet[2823]: W1105 15:57:56.723365 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.723374 kubelet[2823]: E1105 15:57:56.723376 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.723613 kubelet[2823]: E1105 15:57:56.723590 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.723613 kubelet[2823]: W1105 15:57:56.723601 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.723699 kubelet[2823]: E1105 15:57:56.723616 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.723908 kubelet[2823]: E1105 15:57:56.723886 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.723908 kubelet[2823]: W1105 15:57:56.723901 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.723992 kubelet[2823]: E1105 15:57:56.723911 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.724145 kubelet[2823]: E1105 15:57:56.724124 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.724145 kubelet[2823]: W1105 15:57:56.724138 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.724229 kubelet[2823]: E1105 15:57:56.724148 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.724399 kubelet[2823]: E1105 15:57:56.724381 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.724399 kubelet[2823]: W1105 15:57:56.724394 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.724488 kubelet[2823]: E1105 15:57:56.724405 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.724612 kubelet[2823]: E1105 15:57:56.724595 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.724612 kubelet[2823]: W1105 15:57:56.724607 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.724698 kubelet[2823]: E1105 15:57:56.724617 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.724833 kubelet[2823]: E1105 15:57:56.724816 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.724833 kubelet[2823]: W1105 15:57:56.724827 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.724913 kubelet[2823]: E1105 15:57:56.724837 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.725045 kubelet[2823]: E1105 15:57:56.725027 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.725045 kubelet[2823]: W1105 15:57:56.725039 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.725136 kubelet[2823]: E1105 15:57:56.725050 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.725295 kubelet[2823]: E1105 15:57:56.725274 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.725295 kubelet[2823]: W1105 15:57:56.725289 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.725393 kubelet[2823]: E1105 15:57:56.725318 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:56.725703 kubelet[2823]: E1105 15:57:56.725687 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:56.725703 kubelet[2823]: W1105 15:57:56.725700 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:56.725778 kubelet[2823]: E1105 15:57:56.725711 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.571522 kubelet[2823]: I1105 15:57:57.571484 2823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 5 15:57:57.572030 kubelet[2823]: E1105 15:57:57.571859 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:57:57.668640 kubelet[2823]: E1105 15:57:57.668569 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.668640 kubelet[2823]: W1105 15:57:57.668628 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.668897 kubelet[2823]: E1105 15:57:57.668679 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.669022 kubelet[2823]: E1105 15:57:57.669005 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.669069 kubelet[2823]: W1105 15:57:57.669017 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.669069 kubelet[2823]: E1105 15:57:57.669055 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.669360 kubelet[2823]: E1105 15:57:57.669337 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.669360 kubelet[2823]: W1105 15:57:57.669358 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.669484 kubelet[2823]: E1105 15:57:57.669369 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.669642 kubelet[2823]: E1105 15:57:57.669608 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.669642 kubelet[2823]: W1105 15:57:57.669620 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.669642 kubelet[2823]: E1105 15:57:57.669631 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.669893 kubelet[2823]: E1105 15:57:57.669860 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.669893 kubelet[2823]: W1105 15:57:57.669873 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.669893 kubelet[2823]: E1105 15:57:57.669884 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.670119 kubelet[2823]: E1105 15:57:57.670095 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.670119 kubelet[2823]: W1105 15:57:57.670107 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.670119 kubelet[2823]: E1105 15:57:57.670117 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.670386 kubelet[2823]: E1105 15:57:57.670360 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.670386 kubelet[2823]: W1105 15:57:57.670372 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.670386 kubelet[2823]: E1105 15:57:57.670383 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.670614 kubelet[2823]: E1105 15:57:57.670590 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.670614 kubelet[2823]: W1105 15:57:57.670613 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.670700 kubelet[2823]: E1105 15:57:57.670625 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.670875 kubelet[2823]: E1105 15:57:57.670850 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.670875 kubelet[2823]: W1105 15:57:57.670861 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.670875 kubelet[2823]: E1105 15:57:57.670870 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.671097 kubelet[2823]: E1105 15:57:57.671073 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.671097 kubelet[2823]: W1105 15:57:57.671085 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.671097 kubelet[2823]: E1105 15:57:57.671095 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.671355 kubelet[2823]: E1105 15:57:57.671330 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.671355 kubelet[2823]: W1105 15:57:57.671342 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.671355 kubelet[2823]: E1105 15:57:57.671353 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.671586 kubelet[2823]: E1105 15:57:57.671563 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.671586 kubelet[2823]: W1105 15:57:57.671574 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.671586 kubelet[2823]: E1105 15:57:57.671584 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.671822 kubelet[2823]: E1105 15:57:57.671800 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.671822 kubelet[2823]: W1105 15:57:57.671810 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.671822 kubelet[2823]: E1105 15:57:57.671821 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.672050 kubelet[2823]: E1105 15:57:57.672027 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.672050 kubelet[2823]: W1105 15:57:57.672039 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.672050 kubelet[2823]: E1105 15:57:57.672048 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.672291 kubelet[2823]: E1105 15:57:57.672266 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.672291 kubelet[2823]: W1105 15:57:57.672278 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.672291 kubelet[2823]: E1105 15:57:57.672288 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.728865 kubelet[2823]: E1105 15:57:57.728805 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.728865 kubelet[2823]: W1105 15:57:57.728833 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.728865 kubelet[2823]: E1105 15:57:57.728854 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.729212 kubelet[2823]: E1105 15:57:57.729170 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.729212 kubelet[2823]: W1105 15:57:57.729192 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.729290 kubelet[2823]: E1105 15:57:57.729219 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.729584 kubelet[2823]: E1105 15:57:57.729560 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.729584 kubelet[2823]: W1105 15:57:57.729576 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.729584 kubelet[2823]: E1105 15:57:57.729587 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.729970 kubelet[2823]: E1105 15:57:57.729927 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.729970 kubelet[2823]: W1105 15:57:57.729956 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.730038 kubelet[2823]: E1105 15:57:57.729979 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.730257 kubelet[2823]: E1105 15:57:57.730225 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.730257 kubelet[2823]: W1105 15:57:57.730236 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.730257 kubelet[2823]: E1105 15:57:57.730244 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.730489 kubelet[2823]: E1105 15:57:57.730466 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.730489 kubelet[2823]: W1105 15:57:57.730485 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.730564 kubelet[2823]: E1105 15:57:57.730499 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.730813 kubelet[2823]: E1105 15:57:57.730785 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.730813 kubelet[2823]: W1105 15:57:57.730800 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.730861 kubelet[2823]: E1105 15:57:57.730814 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.731076 kubelet[2823]: E1105 15:57:57.731053 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.731076 kubelet[2823]: W1105 15:57:57.731070 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.731123 kubelet[2823]: E1105 15:57:57.731081 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.731425 kubelet[2823]: E1105 15:57:57.731409 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.731452 kubelet[2823]: W1105 15:57:57.731426 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.731452 kubelet[2823]: E1105 15:57:57.731446 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.731827 kubelet[2823]: E1105 15:57:57.731795 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.731827 kubelet[2823]: W1105 15:57:57.731819 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.731890 kubelet[2823]: E1105 15:57:57.731839 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.732107 kubelet[2823]: E1105 15:57:57.732078 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.732107 kubelet[2823]: W1105 15:57:57.732100 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.732156 kubelet[2823]: E1105 15:57:57.732117 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.732497 kubelet[2823]: E1105 15:57:57.732469 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.732497 kubelet[2823]: W1105 15:57:57.732492 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.732552 kubelet[2823]: E1105 15:57:57.732510 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:57:57.732861 kubelet[2823]: E1105 15:57:57.732832 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:57:57.732885 kubelet[2823]: W1105 15:57:57.732855 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:57:57.732885 kubelet[2823]: E1105 15:57:57.732874 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:57.733145 kubelet[2823]: E1105 15:57:57.733118 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:57.733145 kubelet[2823]: W1105 15:57:57.733140 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:57.733194 kubelet[2823]: E1105 15:57:57.733158 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:57.733440 kubelet[2823]: E1105 15:57:57.733420 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:57.733467 kubelet[2823]: W1105 15:57:57.733440 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:57.733467 kubelet[2823]: E1105 15:57:57.733459 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:57.733746 kubelet[2823]: E1105 15:57:57.733717 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:57.733746 kubelet[2823]: W1105 15:57:57.733740 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:57.733900 kubelet[2823]: E1105 15:57:57.733758 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:57.734024 kubelet[2823]: E1105 15:57:57.733996 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:57.734024 kubelet[2823]: W1105 15:57:57.734018 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:57.734068 kubelet[2823]: E1105 15:57:57.734036 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:57:57.734764 kubelet[2823]: E1105 15:57:57.734735 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:57:57.734764 kubelet[2823]: W1105 15:57:57.734750 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:57:57.734764 kubelet[2823]: E1105 15:57:57.734761 2823 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:57:57.901775 containerd[1625]: time="2025-11-05T15:57:57.901616407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:57:57.902528 containerd[1625]: time="2025-11-05T15:57:57.902460952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 15:57:57.904073 containerd[1625]: time="2025-11-05T15:57:57.903956007Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:57:57.906327 containerd[1625]: time="2025-11-05T15:57:57.906245612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:57:57.906952 containerd[1625]: time="2025-11-05T15:57:57.906888739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.008443715s" Nov 5 15:57:57.906952 containerd[1625]: time="2025-11-05T15:57:57.906935838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 15:57:57.911141 containerd[1625]: time="2025-11-05T15:57:57.911095091Z" level=info msg="CreateContainer within sandbox \"84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:57:57.921084 containerd[1625]: time="2025-11-05T15:57:57.921021004Z" level=info msg="Container 5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:57:57.934756 containerd[1625]: time="2025-11-05T15:57:57.934693158Z" level=info msg="CreateContainer within sandbox \"84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a\"" Nov 5 15:57:57.935547 containerd[1625]: time="2025-11-05T15:57:57.935485574Z" level=info msg="StartContainer for \"5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a\"" Nov 5 15:57:57.937496 containerd[1625]: time="2025-11-05T15:57:57.937459588Z" level=info msg="connecting to shim 5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a" address="unix:///run/containerd/s/3b465a0b86ffead177f5c04fd7685666e718158300a6b5e8822a46cf0dce829b" protocol=ttrpc version=3 Nov 5 15:57:57.969599 systemd[1]: Started cri-containerd-5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a.scope - libcontainer container 5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a. 
Nov 5 15:57:58.019738 containerd[1625]: time="2025-11-05T15:57:58.019687118Z" level=info msg="StartContainer for \"5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a\" returns successfully" Nov 5 15:57:58.030648 systemd[1]: cri-containerd-5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a.scope: Deactivated successfully. Nov 5 15:57:58.032658 containerd[1625]: time="2025-11-05T15:57:58.032617818Z" level=info msg="received exit event container_id:\"5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a\" id:\"5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a\" pid:3514 exited_at:{seconds:1762358278 nanos:32089667}" Nov 5 15:57:58.032757 containerd[1625]: time="2025-11-05T15:57:58.032740198Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a\" id:\"5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a\" pid:3514 exited_at:{seconds:1762358278 nanos:32089667}" Nov 5 15:57:58.061601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d97fa371d42fb0ea8c79f9c40e123c6bf9ed47689e7173c152a8cbd6bdf941a-rootfs.mount: Deactivated successfully. 
Nov 5 15:57:58.439786 kubelet[2823]: E1105 15:57:58.439703 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:57:58.575593 kubelet[2823]: E1105 15:57:58.575547 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:59.579395 kubelet[2823]: E1105 15:57:59.579212 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:57:59.579997 containerd[1625]: time="2025-11-05T15:57:59.579942373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:58:00.439872 kubelet[2823]: E1105 15:58:00.439804 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:58:02.439863 kubelet[2823]: E1105 15:58:02.439750 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:58:03.342860 containerd[1625]: time="2025-11-05T15:58:03.342726458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:58:03.344482 
containerd[1625]: time="2025-11-05T15:58:03.344342386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 15:58:03.349792 containerd[1625]: time="2025-11-05T15:58:03.349735886Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:58:03.353822 containerd[1625]: time="2025-11-05T15:58:03.353767109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:58:03.356451 containerd[1625]: time="2025-11-05T15:58:03.356401018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.77641364s" Nov 5 15:58:03.356451 containerd[1625]: time="2025-11-05T15:58:03.356444873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 15:58:03.375649 containerd[1625]: time="2025-11-05T15:58:03.375567818Z" level=info msg="CreateContainer within sandbox \"84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:58:03.417725 containerd[1625]: time="2025-11-05T15:58:03.414631473Z" level=info msg="Container 50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:58:03.446727 containerd[1625]: time="2025-11-05T15:58:03.446639761Z" level=info msg="CreateContainer within sandbox 
\"84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f\"" Nov 5 15:58:03.449632 containerd[1625]: time="2025-11-05T15:58:03.447560644Z" level=info msg="StartContainer for \"50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f\"" Nov 5 15:58:03.454095 containerd[1625]: time="2025-11-05T15:58:03.454038423Z" level=info msg="connecting to shim 50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f" address="unix:///run/containerd/s/3b465a0b86ffead177f5c04fd7685666e718158300a6b5e8822a46cf0dce829b" protocol=ttrpc version=3 Nov 5 15:58:03.546618 systemd[1]: Started cri-containerd-50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f.scope - libcontainer container 50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f. Nov 5 15:58:03.740355 containerd[1625]: time="2025-11-05T15:58:03.740281358Z" level=info msg="StartContainer for \"50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f\" returns successfully" Nov 5 15:58:04.441324 kubelet[2823]: E1105 15:58:04.439874 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:58:04.616825 kubelet[2823]: E1105 15:58:04.615957 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:05.484040 systemd[1]: cri-containerd-50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f.scope: Deactivated successfully. 
Nov 5 15:58:05.484491 systemd[1]: cri-containerd-50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f.scope: Consumed 868ms CPU time, 183.8M memory peak, 3.3M read from disk, 171.3M written to disk. Nov 5 15:58:05.497779 containerd[1625]: time="2025-11-05T15:58:05.497712015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f\" id:\"50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f\" pid:3574 exited_at:{seconds:1762358285 nanos:490045310}" Nov 5 15:58:05.498336 containerd[1625]: time="2025-11-05T15:58:05.497813270Z" level=info msg="received exit event container_id:\"50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f\" id:\"50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f\" pid:3574 exited_at:{seconds:1762358285 nanos:490045310}" Nov 5 15:58:05.570458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50d5985e848846bb89514f7e7da3a1efbd1f7aeae561d8660e33e168531fda5f-rootfs.mount: Deactivated successfully. Nov 5 15:58:05.618877 kubelet[2823]: I1105 15:58:05.617228 2823 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 5 15:58:05.621994 kubelet[2823]: E1105 15:58:05.621934 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:05.925058 systemd[1]: Created slice kubepods-besteffort-pode568ac1b_7203_41c5_978d_53ea0a375013.slice - libcontainer container kubepods-besteffort-pode568ac1b_7203_41c5_978d_53ea0a375013.slice. Nov 5 15:58:05.938981 systemd[1]: Created slice kubepods-burstable-pod60a572ee_9f85_42df_8906_eb4bf9d5e5c1.slice - libcontainer container kubepods-burstable-pod60a572ee_9f85_42df_8906_eb4bf9d5e5c1.slice. 
Nov 5 15:58:05.948545 kubelet[2823]: I1105 15:58:05.948500 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvhxh\" (UniqueName: \"kubernetes.io/projected/36f63588-e88e-4e5e-be35-3d453ebfbecf-kube-api-access-tvhxh\") pod \"calico-apiserver-666f597f8b-qsk25\" (UID: \"36f63588-e88e-4e5e-be35-3d453ebfbecf\") " pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" Nov 5 15:58:05.948545 kubelet[2823]: I1105 15:58:05.948547 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/def47882-ae7c-4469-bdea-ed04b63c4c12-goldmane-key-pair\") pod \"goldmane-7c778bb748-4gzxq\" (UID: \"def47882-ae7c-4469-bdea-ed04b63c4c12\") " pod="calico-system/goldmane-7c778bb748-4gzxq" Nov 5 15:58:05.948856 kubelet[2823]: I1105 15:58:05.948573 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8m7z\" (UniqueName: \"kubernetes.io/projected/a9ea0dd2-f690-473a-aa1d-3c08114559e4-kube-api-access-n8m7z\") pod \"whisker-5999bc8664-wsdld\" (UID: \"a9ea0dd2-f690-473a-aa1d-3c08114559e4\") " pod="calico-system/whisker-5999bc8664-wsdld" Nov 5 15:58:05.948856 kubelet[2823]: I1105 15:58:05.948592 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a2a255a-1d40-4545-90a6-e6052dd9a0ae-config-volume\") pod \"coredns-66bc5c9577-7x86x\" (UID: \"2a2a255a-1d40-4545-90a6-e6052dd9a0ae\") " pod="kube-system/coredns-66bc5c9577-7x86x" Nov 5 15:58:05.949247 kubelet[2823]: I1105 15:58:05.948632 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36f63588-e88e-4e5e-be35-3d453ebfbecf-calico-apiserver-certs\") pod \"calico-apiserver-666f597f8b-qsk25\" (UID: 
\"36f63588-e88e-4e5e-be35-3d453ebfbecf\") " pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" Nov 5 15:58:05.949247 kubelet[2823]: I1105 15:58:05.948974 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zwpj\" (UniqueName: \"kubernetes.io/projected/def47882-ae7c-4469-bdea-ed04b63c4c12-kube-api-access-4zwpj\") pod \"goldmane-7c778bb748-4gzxq\" (UID: \"def47882-ae7c-4469-bdea-ed04b63c4c12\") " pod="calico-system/goldmane-7c778bb748-4gzxq" Nov 5 15:58:05.949247 kubelet[2823]: I1105 15:58:05.948998 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/def47882-ae7c-4469-bdea-ed04b63c4c12-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-4gzxq\" (UID: \"def47882-ae7c-4469-bdea-ed04b63c4c12\") " pod="calico-system/goldmane-7c778bb748-4gzxq" Nov 5 15:58:05.949247 kubelet[2823]: I1105 15:58:05.949027 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5jcl\" (UniqueName: \"kubernetes.io/projected/091e450e-4da8-4476-b3e3-4b2049f9a92c-kube-api-access-f5jcl\") pod \"calico-apiserver-666f597f8b-vgtfd\" (UID: \"091e450e-4da8-4476-b3e3-4b2049f9a92c\") " pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" Nov 5 15:58:05.949247 kubelet[2823]: I1105 15:58:05.949046 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9ea0dd2-f690-473a-aa1d-3c08114559e4-whisker-ca-bundle\") pod \"whisker-5999bc8664-wsdld\" (UID: \"a9ea0dd2-f690-473a-aa1d-3c08114559e4\") " pod="calico-system/whisker-5999bc8664-wsdld" Nov 5 15:58:05.949455 kubelet[2823]: I1105 15:58:05.949070 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nwrg\" (UniqueName: 
\"kubernetes.io/projected/60a572ee-9f85-42df-8906-eb4bf9d5e5c1-kube-api-access-6nwrg\") pod \"coredns-66bc5c9577-wctmv\" (UID: \"60a572ee-9f85-42df-8906-eb4bf9d5e5c1\") " pod="kube-system/coredns-66bc5c9577-wctmv" Nov 5 15:58:05.949455 kubelet[2823]: I1105 15:58:05.949091 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e568ac1b-7203-41c5-978d-53ea0a375013-tigera-ca-bundle\") pod \"calico-kube-controllers-94f7df89d-9b28s\" (UID: \"e568ac1b-7203-41c5-978d-53ea0a375013\") " pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" Nov 5 15:58:05.949455 kubelet[2823]: I1105 15:58:05.949109 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r8r2\" (UniqueName: \"kubernetes.io/projected/e568ac1b-7203-41c5-978d-53ea0a375013-kube-api-access-4r8r2\") pod \"calico-kube-controllers-94f7df89d-9b28s\" (UID: \"e568ac1b-7203-41c5-978d-53ea0a375013\") " pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" Nov 5 15:58:05.949455 kubelet[2823]: I1105 15:58:05.949127 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/091e450e-4da8-4476-b3e3-4b2049f9a92c-calico-apiserver-certs\") pod \"calico-apiserver-666f597f8b-vgtfd\" (UID: \"091e450e-4da8-4476-b3e3-4b2049f9a92c\") " pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" Nov 5 15:58:05.949455 kubelet[2823]: I1105 15:58:05.949146 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w27f\" (UniqueName: \"kubernetes.io/projected/2a2a255a-1d40-4545-90a6-e6052dd9a0ae-kube-api-access-4w27f\") pod \"coredns-66bc5c9577-7x86x\" (UID: \"2a2a255a-1d40-4545-90a6-e6052dd9a0ae\") " pod="kube-system/coredns-66bc5c9577-7x86x" Nov 5 15:58:05.949684 kubelet[2823]: I1105 15:58:05.949163 
2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/def47882-ae7c-4469-bdea-ed04b63c4c12-config\") pod \"goldmane-7c778bb748-4gzxq\" (UID: \"def47882-ae7c-4469-bdea-ed04b63c4c12\") " pod="calico-system/goldmane-7c778bb748-4gzxq" Nov 5 15:58:05.949684 kubelet[2823]: I1105 15:58:05.949184 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a9ea0dd2-f690-473a-aa1d-3c08114559e4-whisker-backend-key-pair\") pod \"whisker-5999bc8664-wsdld\" (UID: \"a9ea0dd2-f690-473a-aa1d-3c08114559e4\") " pod="calico-system/whisker-5999bc8664-wsdld" Nov 5 15:58:05.949684 kubelet[2823]: I1105 15:58:05.949201 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a572ee-9f85-42df-8906-eb4bf9d5e5c1-config-volume\") pod \"coredns-66bc5c9577-wctmv\" (UID: \"60a572ee-9f85-42df-8906-eb4bf9d5e5c1\") " pod="kube-system/coredns-66bc5c9577-wctmv" Nov 5 15:58:05.953372 systemd[1]: Created slice kubepods-besteffort-poda9ea0dd2_f690_473a_aa1d_3c08114559e4.slice - libcontainer container kubepods-besteffort-poda9ea0dd2_f690_473a_aa1d_3c08114559e4.slice. Nov 5 15:58:05.963954 systemd[1]: Created slice kubepods-besteffort-pod091e450e_4da8_4476_b3e3_4b2049f9a92c.slice - libcontainer container kubepods-besteffort-pod091e450e_4da8_4476_b3e3_4b2049f9a92c.slice. Nov 5 15:58:05.979418 systemd[1]: Created slice kubepods-burstable-pod2a2a255a_1d40_4545_90a6_e6052dd9a0ae.slice - libcontainer container kubepods-burstable-pod2a2a255a_1d40_4545_90a6_e6052dd9a0ae.slice. Nov 5 15:58:05.991833 systemd[1]: Created slice kubepods-besteffort-pod36f63588_e88e_4e5e_be35_3d453ebfbecf.slice - libcontainer container kubepods-besteffort-pod36f63588_e88e_4e5e_be35_3d453ebfbecf.slice. 
Nov 5 15:58:06.000951 systemd[1]: Created slice kubepods-besteffort-poddef47882_ae7c_4469_bdea_ed04b63c4c12.slice - libcontainer container kubepods-besteffort-poddef47882_ae7c_4469_bdea_ed04b63c4c12.slice. Nov 5 15:58:06.252363 containerd[1625]: time="2025-11-05T15:58:06.249903146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f7df89d-9b28s,Uid:e568ac1b-7203-41c5-978d-53ea0a375013,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:06.260406 kubelet[2823]: E1105 15:58:06.260264 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:06.262708 containerd[1625]: time="2025-11-05T15:58:06.261983751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wctmv,Uid:60a572ee-9f85-42df-8906-eb4bf9d5e5c1,Namespace:kube-system,Attempt:0,}" Nov 5 15:58:06.270092 containerd[1625]: time="2025-11-05T15:58:06.269586721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5999bc8664-wsdld,Uid:a9ea0dd2-f690-473a-aa1d-3c08114559e4,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:06.277504 containerd[1625]: time="2025-11-05T15:58:06.277372264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666f597f8b-vgtfd,Uid:091e450e-4da8-4476-b3e3-4b2049f9a92c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:58:06.293930 kubelet[2823]: E1105 15:58:06.292678 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:06.294067 containerd[1625]: time="2025-11-05T15:58:06.293547997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7x86x,Uid:2a2a255a-1d40-4545-90a6-e6052dd9a0ae,Namespace:kube-system,Attempt:0,}" Nov 5 15:58:06.309723 containerd[1625]: time="2025-11-05T15:58:06.309662241Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4gzxq,Uid:def47882-ae7c-4469-bdea-ed04b63c4c12,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:06.315338 containerd[1625]: time="2025-11-05T15:58:06.310277308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666f597f8b-qsk25,Uid:36f63588-e88e-4e5e-be35-3d453ebfbecf,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:58:06.459479 systemd[1]: Created slice kubepods-besteffort-pod6636406c_76cb_4ddc_8f4d_b82da1f33a92.slice - libcontainer container kubepods-besteffort-pod6636406c_76cb_4ddc_8f4d_b82da1f33a92.slice. Nov 5 15:58:06.468447 containerd[1625]: time="2025-11-05T15:58:06.468394809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cr9h8,Uid:6636406c-76cb-4ddc-8f4d-b82da1f33a92,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:06.495174 containerd[1625]: time="2025-11-05T15:58:06.495113047Z" level=error msg="Failed to destroy network for sandbox \"76b7cc2695aae5323058f4242baffc337781c6664833f314f63b61439eaa345c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.532712 containerd[1625]: time="2025-11-05T15:58:06.532541907Z" level=error msg="Failed to destroy network for sandbox \"1ac4c903aded6d90f0b817d6de54b561fb6b5e8972be9bfa9200a62244e40b02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.545166 containerd[1625]: time="2025-11-05T15:58:06.545083933Z" level=error msg="Failed to destroy network for sandbox \"88ba431ea1d80f1a2c0c7562669a8172e0494ff0fcc9e7c1bd2737fe5404f178\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 5 15:58:06.545473 containerd[1625]: time="2025-11-05T15:58:06.545097219Z" level=error msg="Failed to destroy network for sandbox \"498034d6df10c401f6bc7986946f750f9ccc6ff739ad675f410f8c0c60f62c51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.545835 containerd[1625]: time="2025-11-05T15:58:06.545724681Z" level=error msg="Failed to destroy network for sandbox \"06ebd1c33f56168816610124dde9973e831ee1b793c31948d91ce6dcc1548b83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.560146 containerd[1625]: time="2025-11-05T15:58:06.560017679Z" level=error msg="Failed to destroy network for sandbox \"9b93eaa50c1db43b88e8865b6b42fe7a959b882621a08388cbb42fd57829379b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.562432 containerd[1625]: time="2025-11-05T15:58:06.562286670Z" level=error msg="Failed to destroy network for sandbox \"52e87e77dbdc29e2c333060a17867a8d7e34f404d0fb79806ba48d8b1b50f116\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.595622 containerd[1625]: time="2025-11-05T15:58:06.595548423Z" level=error msg="Failed to destroy network for sandbox \"5fad8eccec2b2f586e570bed70161bd31e1e190a779aff3b85d5be874948fbe0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.597911 
systemd[1]: run-netns-cni\x2ddb00534c\x2d41f9\x2d4406\x2d0d32\x2d05b07ce5e920.mount: Deactivated successfully. Nov 5 15:58:06.624716 containerd[1625]: time="2025-11-05T15:58:06.624617771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4gzxq,Uid:def47882-ae7c-4469-bdea-ed04b63c4c12,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b7cc2695aae5323058f4242baffc337781c6664833f314f63b61439eaa345c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.647557 kubelet[2823]: E1105 15:58:06.646268 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b7cc2695aae5323058f4242baffc337781c6664833f314f63b61439eaa345c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.647557 kubelet[2823]: E1105 15:58:06.646404 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b7cc2695aae5323058f4242baffc337781c6664833f314f63b61439eaa345c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4gzxq" Nov 5 15:58:06.647557 kubelet[2823]: E1105 15:58:06.646432 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b7cc2695aae5323058f4242baffc337781c6664833f314f63b61439eaa345c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4gzxq" Nov 5 15:58:06.648225 kubelet[2823]: E1105 15:58:06.646506 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-4gzxq_calico-system(def47882-ae7c-4469-bdea-ed04b63c4c12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-4gzxq_calico-system(def47882-ae7c-4469-bdea-ed04b63c4c12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76b7cc2695aae5323058f4242baffc337781c6664833f314f63b61439eaa345c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-4gzxq" podUID="def47882-ae7c-4469-bdea-ed04b63c4c12" Nov 5 15:58:06.657828 kubelet[2823]: E1105 15:58:06.657669 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:06.660496 containerd[1625]: time="2025-11-05T15:58:06.659050375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:58:06.703777 containerd[1625]: time="2025-11-05T15:58:06.703645742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666f597f8b-vgtfd,Uid:091e450e-4da8-4476-b3e3-4b2049f9a92c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac4c903aded6d90f0b817d6de54b561fb6b5e8972be9bfa9200a62244e40b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.704142 kubelet[2823]: E1105 15:58:06.704046 2823 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac4c903aded6d90f0b817d6de54b561fb6b5e8972be9bfa9200a62244e40b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.704232 kubelet[2823]: E1105 15:58:06.704146 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac4c903aded6d90f0b817d6de54b561fb6b5e8972be9bfa9200a62244e40b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" Nov 5 15:58:06.704232 kubelet[2823]: E1105 15:58:06.704175 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac4c903aded6d90f0b817d6de54b561fb6b5e8972be9bfa9200a62244e40b02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" Nov 5 15:58:06.704334 kubelet[2823]: E1105 15:58:06.704244 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-666f597f8b-vgtfd_calico-apiserver(091e450e-4da8-4476-b3e3-4b2049f9a92c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-666f597f8b-vgtfd_calico-apiserver(091e450e-4da8-4476-b3e3-4b2049f9a92c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ac4c903aded6d90f0b817d6de54b561fb6b5e8972be9bfa9200a62244e40b02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" podUID="091e450e-4da8-4476-b3e3-4b2049f9a92c" Nov 5 15:58:06.791256 containerd[1625]: time="2025-11-05T15:58:06.790808820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7x86x,Uid:2a2a255a-1d40-4545-90a6-e6052dd9a0ae,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ba431ea1d80f1a2c0c7562669a8172e0494ff0fcc9e7c1bd2737fe5404f178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.792510 kubelet[2823]: E1105 15:58:06.791863 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ba431ea1d80f1a2c0c7562669a8172e0494ff0fcc9e7c1bd2737fe5404f178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.792510 kubelet[2823]: E1105 15:58:06.792014 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ba431ea1d80f1a2c0c7562669a8172e0494ff0fcc9e7c1bd2737fe5404f178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7x86x" Nov 5 15:58:06.792510 kubelet[2823]: E1105 15:58:06.792043 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88ba431ea1d80f1a2c0c7562669a8172e0494ff0fcc9e7c1bd2737fe5404f178\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7x86x" Nov 5 15:58:06.794762 kubelet[2823]: E1105 15:58:06.792105 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7x86x_kube-system(2a2a255a-1d40-4545-90a6-e6052dd9a0ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7x86x_kube-system(2a2a255a-1d40-4545-90a6-e6052dd9a0ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88ba431ea1d80f1a2c0c7562669a8172e0494ff0fcc9e7c1bd2737fe5404f178\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7x86x" podUID="2a2a255a-1d40-4545-90a6-e6052dd9a0ae" Nov 5 15:58:06.815802 containerd[1625]: time="2025-11-05T15:58:06.815673680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wctmv,Uid:60a572ee-9f85-42df-8906-eb4bf9d5e5c1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ebd1c33f56168816610124dde9973e831ee1b793c31948d91ce6dcc1548b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.824176 kubelet[2823]: E1105 15:58:06.823871 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ebd1c33f56168816610124dde9973e831ee1b793c31948d91ce6dcc1548b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 5 15:58:06.824176 kubelet[2823]: E1105 15:58:06.823958 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ebd1c33f56168816610124dde9973e831ee1b793c31948d91ce6dcc1548b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wctmv" Nov 5 15:58:06.824176 kubelet[2823]: E1105 15:58:06.823986 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ebd1c33f56168816610124dde9973e831ee1b793c31948d91ce6dcc1548b83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wctmv" Nov 5 15:58:06.824433 kubelet[2823]: E1105 15:58:06.824044 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wctmv_kube-system(60a572ee-9f85-42df-8906-eb4bf9d5e5c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wctmv_kube-system(60a572ee-9f85-42df-8906-eb4bf9d5e5c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06ebd1c33f56168816610124dde9973e831ee1b793c31948d91ce6dcc1548b83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wctmv" podUID="60a572ee-9f85-42df-8906-eb4bf9d5e5c1" Nov 5 15:58:06.902381 containerd[1625]: time="2025-11-05T15:58:06.901082311Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-666f597f8b-qsk25,Uid:36f63588-e88e-4e5e-be35-3d453ebfbecf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"498034d6df10c401f6bc7986946f750f9ccc6ff739ad675f410f8c0c60f62c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.902583 kubelet[2823]: E1105 15:58:06.901659 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"498034d6df10c401f6bc7986946f750f9ccc6ff739ad675f410f8c0c60f62c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.902583 kubelet[2823]: E1105 15:58:06.901836 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"498034d6df10c401f6bc7986946f750f9ccc6ff739ad675f410f8c0c60f62c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" Nov 5 15:58:06.902583 kubelet[2823]: E1105 15:58:06.901865 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"498034d6df10c401f6bc7986946f750f9ccc6ff739ad675f410f8c0c60f62c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" Nov 5 15:58:06.902738 kubelet[2823]: E1105 15:58:06.901960 2823 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-666f597f8b-qsk25_calico-apiserver(36f63588-e88e-4e5e-be35-3d453ebfbecf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-666f597f8b-qsk25_calico-apiserver(36f63588-e88e-4e5e-be35-3d453ebfbecf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"498034d6df10c401f6bc7986946f750f9ccc6ff739ad675f410f8c0c60f62c51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" podUID="36f63588-e88e-4e5e-be35-3d453ebfbecf" Nov 5 15:58:06.922387 containerd[1625]: time="2025-11-05T15:58:06.920471439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f7df89d-9b28s,Uid:e568ac1b-7203-41c5-978d-53ea0a375013,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b93eaa50c1db43b88e8865b6b42fe7a959b882621a08388cbb42fd57829379b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.923761 kubelet[2823]: E1105 15:58:06.923172 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b93eaa50c1db43b88e8865b6b42fe7a959b882621a08388cbb42fd57829379b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.923761 kubelet[2823]: E1105 15:58:06.923274 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"9b93eaa50c1db43b88e8865b6b42fe7a959b882621a08388cbb42fd57829379b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" Nov 5 15:58:06.923761 kubelet[2823]: E1105 15:58:06.923368 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b93eaa50c1db43b88e8865b6b42fe7a959b882621a08388cbb42fd57829379b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" Nov 5 15:58:06.924045 kubelet[2823]: E1105 15:58:06.923487 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-94f7df89d-9b28s_calico-system(e568ac1b-7203-41c5-978d-53ea0a375013)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-94f7df89d-9b28s_calico-system(e568ac1b-7203-41c5-978d-53ea0a375013)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b93eaa50c1db43b88e8865b6b42fe7a959b882621a08388cbb42fd57829379b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" podUID="e568ac1b-7203-41c5-978d-53ea0a375013" Nov 5 15:58:06.925960 containerd[1625]: time="2025-11-05T15:58:06.925546177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5999bc8664-wsdld,Uid:a9ea0dd2-f690-473a-aa1d-3c08114559e4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"52e87e77dbdc29e2c333060a17867a8d7e34f404d0fb79806ba48d8b1b50f116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.926248 kubelet[2823]: E1105 15:58:06.925923 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e87e77dbdc29e2c333060a17867a8d7e34f404d0fb79806ba48d8b1b50f116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.926248 kubelet[2823]: E1105 15:58:06.925962 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e87e77dbdc29e2c333060a17867a8d7e34f404d0fb79806ba48d8b1b50f116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5999bc8664-wsdld" Nov 5 15:58:06.926248 kubelet[2823]: E1105 15:58:06.925983 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e87e77dbdc29e2c333060a17867a8d7e34f404d0fb79806ba48d8b1b50f116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5999bc8664-wsdld" Nov 5 15:58:06.926436 kubelet[2823]: E1105 15:58:06.926041 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5999bc8664-wsdld_calico-system(a9ea0dd2-f690-473a-aa1d-3c08114559e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5999bc8664-wsdld_calico-system(a9ea0dd2-f690-473a-aa1d-3c08114559e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52e87e77dbdc29e2c333060a17867a8d7e34f404d0fb79806ba48d8b1b50f116\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5999bc8664-wsdld" podUID="a9ea0dd2-f690-473a-aa1d-3c08114559e4" Nov 5 15:58:06.927974 containerd[1625]: time="2025-11-05T15:58:06.927866587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cr9h8,Uid:6636406c-76cb-4ddc-8f4d-b82da1f33a92,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fad8eccec2b2f586e570bed70161bd31e1e190a779aff3b85d5be874948fbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.929704 kubelet[2823]: E1105 15:58:06.929525 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fad8eccec2b2f586e570bed70161bd31e1e190a779aff3b85d5be874948fbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:06.929704 kubelet[2823]: E1105 15:58:06.929612 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fad8eccec2b2f586e570bed70161bd31e1e190a779aff3b85d5be874948fbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-cr9h8" Nov 5 15:58:06.929704 kubelet[2823]: E1105 15:58:06.929639 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fad8eccec2b2f586e570bed70161bd31e1e190a779aff3b85d5be874948fbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cr9h8" Nov 5 15:58:06.930009 kubelet[2823]: E1105 15:58:06.929777 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fad8eccec2b2f586e570bed70161bd31e1e190a779aff3b85d5be874948fbe0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:58:17.679758 kubelet[2823]: E1105 15:58:17.679683 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:17.680985 containerd[1625]: time="2025-11-05T15:58:17.680589216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7x86x,Uid:2a2a255a-1d40-4545-90a6-e6052dd9a0ae,Namespace:kube-system,Attempt:0,}" Nov 5 15:58:18.106999 containerd[1625]: time="2025-11-05T15:58:18.106686431Z" level=error msg="Failed to destroy network for sandbox 
\"fd374670a19fc4d3f37cc20664f0ac42bf653237ab56bc3e2a51d9f30b95654b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:18.109425 systemd[1]: run-netns-cni\x2d477d7936\x2d4cec\x2d254d\x2dcb82\x2d27e00ac02a98.mount: Deactivated successfully. Nov 5 15:58:18.447673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709216631.mount: Deactivated successfully. Nov 5 15:58:18.960626 containerd[1625]: time="2025-11-05T15:58:18.959879944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7x86x,Uid:2a2a255a-1d40-4545-90a6-e6052dd9a0ae,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd374670a19fc4d3f37cc20664f0ac42bf653237ab56bc3e2a51d9f30b95654b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:18.961889 kubelet[2823]: E1105 15:58:18.961807 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd374670a19fc4d3f37cc20664f0ac42bf653237ab56bc3e2a51d9f30b95654b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:18.962240 kubelet[2823]: E1105 15:58:18.961900 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd374670a19fc4d3f37cc20664f0ac42bf653237ab56bc3e2a51d9f30b95654b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-7x86x" Nov 5 15:58:18.962976 kubelet[2823]: E1105 15:58:18.961927 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd374670a19fc4d3f37cc20664f0ac42bf653237ab56bc3e2a51d9f30b95654b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7x86x" Nov 5 15:58:18.965700 kubelet[2823]: E1105 15:58:18.965639 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7x86x_kube-system(2a2a255a-1d40-4545-90a6-e6052dd9a0ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7x86x_kube-system(2a2a255a-1d40-4545-90a6-e6052dd9a0ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd374670a19fc4d3f37cc20664f0ac42bf653237ab56bc3e2a51d9f30b95654b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7x86x" podUID="2a2a255a-1d40-4545-90a6-e6052dd9a0ae" Nov 5 15:58:19.134032 containerd[1625]: time="2025-11-05T15:58:19.133678028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:58:19.157026 containerd[1625]: time="2025-11-05T15:58:19.156817538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 15:58:19.162224 containerd[1625]: time="2025-11-05T15:58:19.162149607Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 
15:58:19.244989 containerd[1625]: time="2025-11-05T15:58:19.244473624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:58:19.246217 containerd[1625]: time="2025-11-05T15:58:19.246115858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.587007332s" Nov 5 15:58:19.246217 containerd[1625]: time="2025-11-05T15:58:19.246161766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 15:58:19.312364 containerd[1625]: time="2025-11-05T15:58:19.312250382Z" level=info msg="CreateContainer within sandbox \"84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:58:19.447001 containerd[1625]: time="2025-11-05T15:58:19.446931005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f7df89d-9b28s,Uid:e568ac1b-7203-41c5-978d-53ea0a375013,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:19.453083 containerd[1625]: time="2025-11-05T15:58:19.453020373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666f597f8b-qsk25,Uid:36f63588-e88e-4e5e-be35-3d453ebfbecf,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:58:19.459339 containerd[1625]: time="2025-11-05T15:58:19.459266381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cr9h8,Uid:6636406c-76cb-4ddc-8f4d-b82da1f33a92,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:19.629551 
containerd[1625]: time="2025-11-05T15:58:19.629365051Z" level=info msg="Container 6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:58:19.776352 containerd[1625]: time="2025-11-05T15:58:19.775173588Z" level=info msg="CreateContainer within sandbox \"84f28e6250bffad9ca098a60e0e38a63b9c7c76af105307ec7cb7a5f0e55ed07\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60\"" Nov 5 15:58:19.776917 containerd[1625]: time="2025-11-05T15:58:19.776865537Z" level=info msg="StartContainer for \"6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60\"" Nov 5 15:58:19.781275 containerd[1625]: time="2025-11-05T15:58:19.781083754Z" level=info msg="connecting to shim 6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60" address="unix:///run/containerd/s/3b465a0b86ffead177f5c04fd7685666e718158300a6b5e8822a46cf0dce829b" protocol=ttrpc version=3 Nov 5 15:58:19.785588 containerd[1625]: time="2025-11-05T15:58:19.785511750Z" level=error msg="Failed to destroy network for sandbox \"ff14901eb1a702f848b6fcd7c7e7abbaf388fa3adba5aa2876875379fca4bb2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:19.794722 containerd[1625]: time="2025-11-05T15:58:19.794592786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f7df89d-9b28s,Uid:e568ac1b-7203-41c5-978d-53ea0a375013,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff14901eb1a702f848b6fcd7c7e7abbaf388fa3adba5aa2876875379fca4bb2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 5 15:58:19.795049 kubelet[2823]: E1105 15:58:19.794997 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff14901eb1a702f848b6fcd7c7e7abbaf388fa3adba5aa2876875379fca4bb2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:19.795136 kubelet[2823]: E1105 15:58:19.795082 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff14901eb1a702f848b6fcd7c7e7abbaf388fa3adba5aa2876875379fca4bb2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" Nov 5 15:58:19.795136 kubelet[2823]: E1105 15:58:19.795109 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff14901eb1a702f848b6fcd7c7e7abbaf388fa3adba5aa2876875379fca4bb2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" Nov 5 15:58:19.795214 kubelet[2823]: E1105 15:58:19.795179 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-94f7df89d-9b28s_calico-system(e568ac1b-7203-41c5-978d-53ea0a375013)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-94f7df89d-9b28s_calico-system(e568ac1b-7203-41c5-978d-53ea0a375013)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ff14901eb1a702f848b6fcd7c7e7abbaf388fa3adba5aa2876875379fca4bb2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" podUID="e568ac1b-7203-41c5-978d-53ea0a375013" Nov 5 15:58:19.818763 systemd[1]: Started cri-containerd-6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60.scope - libcontainer container 6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60. Nov 5 15:58:19.836561 containerd[1625]: time="2025-11-05T15:58:19.836469929Z" level=error msg="Failed to destroy network for sandbox \"60843bb50caf22032178c517c5f87f97a2214885b75dd5b268279becbfc8821c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:19.839124 containerd[1625]: time="2025-11-05T15:58:19.839033787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666f597f8b-qsk25,Uid:36f63588-e88e-4e5e-be35-3d453ebfbecf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60843bb50caf22032178c517c5f87f97a2214885b75dd5b268279becbfc8821c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:19.839473 kubelet[2823]: E1105 15:58:19.839416 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60843bb50caf22032178c517c5f87f97a2214885b75dd5b268279becbfc8821c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 5 15:58:19.839534 kubelet[2823]: E1105 15:58:19.839493 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60843bb50caf22032178c517c5f87f97a2214885b75dd5b268279becbfc8821c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" Nov 5 15:58:19.839594 kubelet[2823]: E1105 15:58:19.839529 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60843bb50caf22032178c517c5f87f97a2214885b75dd5b268279becbfc8821c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" Nov 5 15:58:19.839668 kubelet[2823]: E1105 15:58:19.839628 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-666f597f8b-qsk25_calico-apiserver(36f63588-e88e-4e5e-be35-3d453ebfbecf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-666f597f8b-qsk25_calico-apiserver(36f63588-e88e-4e5e-be35-3d453ebfbecf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60843bb50caf22032178c517c5f87f97a2214885b75dd5b268279becbfc8821c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" podUID="36f63588-e88e-4e5e-be35-3d453ebfbecf" Nov 5 15:58:19.861867 containerd[1625]: time="2025-11-05T15:58:19.861814110Z" level=error msg="Failed to destroy network for sandbox 
\"6b25401ec7af0afdf26e9cd2794c5ff93dbdc1e214ebfa62c922c45afeb1417b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:19.863753 containerd[1625]: time="2025-11-05T15:58:19.863692466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cr9h8,Uid:6636406c-76cb-4ddc-8f4d-b82da1f33a92,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b25401ec7af0afdf26e9cd2794c5ff93dbdc1e214ebfa62c922c45afeb1417b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:19.864262 kubelet[2823]: E1105 15:58:19.864217 2823 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b25401ec7af0afdf26e9cd2794c5ff93dbdc1e214ebfa62c922c45afeb1417b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:58:19.864340 kubelet[2823]: E1105 15:58:19.864285 2823 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b25401ec7af0afdf26e9cd2794c5ff93dbdc1e214ebfa62c922c45afeb1417b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cr9h8" Nov 5 15:58:19.864406 kubelet[2823]: E1105 15:58:19.864349 2823 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6b25401ec7af0afdf26e9cd2794c5ff93dbdc1e214ebfa62c922c45afeb1417b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cr9h8" Nov 5 15:58:19.864468 kubelet[2823]: E1105 15:58:19.864438 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b25401ec7af0afdf26e9cd2794c5ff93dbdc1e214ebfa62c922c45afeb1417b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:58:19.998917 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:58:19.999817 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 5 15:58:20.026929 containerd[1625]: time="2025-11-05T15:58:20.026877113Z" level=info msg="StartContainer for \"6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60\" returns successfully" Nov 5 15:58:20.265975 systemd[1]: run-netns-cni\x2db3f828d2\x2d4ca8\x2d71bc\x2dc88e\x2d61e470463503.mount: Deactivated successfully. Nov 5 15:58:20.269572 systemd[1]: run-netns-cni\x2dd8bce8d5\x2d9535\x2d5ab0\x2d7398\x2d6ac184183773.mount: Deactivated successfully. 
Nov 5 15:58:20.313924 kubelet[2823]: I1105 15:58:20.313392 2823 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9ea0dd2-f690-473a-aa1d-3c08114559e4-whisker-ca-bundle\") pod \"a9ea0dd2-f690-473a-aa1d-3c08114559e4\" (UID: \"a9ea0dd2-f690-473a-aa1d-3c08114559e4\") " Nov 5 15:58:20.313924 kubelet[2823]: I1105 15:58:20.313463 2823 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8m7z\" (UniqueName: \"kubernetes.io/projected/a9ea0dd2-f690-473a-aa1d-3c08114559e4-kube-api-access-n8m7z\") pod \"a9ea0dd2-f690-473a-aa1d-3c08114559e4\" (UID: \"a9ea0dd2-f690-473a-aa1d-3c08114559e4\") " Nov 5 15:58:20.313924 kubelet[2823]: I1105 15:58:20.313486 2823 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a9ea0dd2-f690-473a-aa1d-3c08114559e4-whisker-backend-key-pair\") pod \"a9ea0dd2-f690-473a-aa1d-3c08114559e4\" (UID: \"a9ea0dd2-f690-473a-aa1d-3c08114559e4\") " Nov 5 15:58:20.316884 kubelet[2823]: I1105 15:58:20.316831 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ea0dd2-f690-473a-aa1d-3c08114559e4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a9ea0dd2-f690-473a-aa1d-3c08114559e4" (UID: "a9ea0dd2-f690-473a-aa1d-3c08114559e4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:58:20.333089 kubelet[2823]: I1105 15:58:20.332980 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9ea0dd2-f690-473a-aa1d-3c08114559e4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a9ea0dd2-f690-473a-aa1d-3c08114559e4" (UID: "a9ea0dd2-f690-473a-aa1d-3c08114559e4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:58:20.334134 kubelet[2823]: I1105 15:58:20.334110 2823 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ea0dd2-f690-473a-aa1d-3c08114559e4-kube-api-access-n8m7z" (OuterVolumeSpecName: "kube-api-access-n8m7z") pod "a9ea0dd2-f690-473a-aa1d-3c08114559e4" (UID: "a9ea0dd2-f690-473a-aa1d-3c08114559e4"). InnerVolumeSpecName "kube-api-access-n8m7z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:58:20.334316 systemd[1]: var-lib-kubelet-pods-a9ea0dd2\x2df690\x2d473a\x2daa1d\x2d3c08114559e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn8m7z.mount: Deactivated successfully. Nov 5 15:58:20.336219 systemd[1]: var-lib-kubelet-pods-a9ea0dd2\x2df690\x2d473a\x2daa1d\x2d3c08114559e4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 15:58:20.414088 kubelet[2823]: I1105 15:58:20.414037 2823 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a9ea0dd2-f690-473a-aa1d-3c08114559e4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 15:58:20.414088 kubelet[2823]: I1105 15:58:20.414070 2823 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9ea0dd2-f690-473a-aa1d-3c08114559e4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 15:58:20.414088 kubelet[2823]: I1105 15:58:20.414079 2823 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n8m7z\" (UniqueName: \"kubernetes.io/projected/a9ea0dd2-f690-473a-aa1d-3c08114559e4-kube-api-access-n8m7z\") on node \"localhost\" DevicePath \"\"" Nov 5 15:58:20.422533 kubelet[2823]: E1105 15:58:20.422486 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 5 15:58:20.429109 systemd[1]: Removed slice kubepods-besteffort-poda9ea0dd2_f690_473a_aa1d_3c08114559e4.slice - libcontainer container kubepods-besteffort-poda9ea0dd2_f690_473a_aa1d_3c08114559e4.slice. Nov 5 15:58:20.622794 containerd[1625]: time="2025-11-05T15:58:20.622194571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666f597f8b-vgtfd,Uid:091e450e-4da8-4476-b3e3-4b2049f9a92c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:58:21.097043 kubelet[2823]: I1105 15:58:21.096952 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lvlp5" podStartSLOduration=2.943984725 podStartE2EDuration="29.096923049s" podCreationTimestamp="2025-11-05 15:57:52 +0000 UTC" firstStartedPulling="2025-11-05 15:57:53.096441022 +0000 UTC m=+26.866595291" lastFinishedPulling="2025-11-05 15:58:19.249379346 +0000 UTC m=+53.019533615" observedRunningTime="2025-11-05 15:58:21.010687663 +0000 UTC m=+54.780841952" watchObservedRunningTime="2025-11-05 15:58:21.096923049 +0000 UTC m=+54.867077328" Nov 5 15:58:21.183062 kubelet[2823]: I1105 15:58:21.183012 2823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:58:21.185209 kubelet[2823]: E1105 15:58:21.185149 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:21.283890 systemd[1]: Created slice kubepods-besteffort-pod5542c344_4a27_4a33_b28f_ee6d288fca27.slice - libcontainer container kubepods-besteffort-pod5542c344_4a27_4a33_b28f_ee6d288fca27.slice. 
Nov 5 15:58:21.322707 kubelet[2823]: I1105 15:58:21.322577 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5kgf\" (UniqueName: \"kubernetes.io/projected/5542c344-4a27-4a33-b28f-ee6d288fca27-kube-api-access-m5kgf\") pod \"whisker-544bd576d6-k5lb4\" (UID: \"5542c344-4a27-4a33-b28f-ee6d288fca27\") " pod="calico-system/whisker-544bd576d6-k5lb4" Nov 5 15:58:21.323588 kubelet[2823]: I1105 15:58:21.323408 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5542c344-4a27-4a33-b28f-ee6d288fca27-whisker-ca-bundle\") pod \"whisker-544bd576d6-k5lb4\" (UID: \"5542c344-4a27-4a33-b28f-ee6d288fca27\") " pod="calico-system/whisker-544bd576d6-k5lb4" Nov 5 15:58:21.323588 kubelet[2823]: I1105 15:58:21.323511 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5542c344-4a27-4a33-b28f-ee6d288fca27-whisker-backend-key-pair\") pod \"whisker-544bd576d6-k5lb4\" (UID: \"5542c344-4a27-4a33-b28f-ee6d288fca27\") " pod="calico-system/whisker-544bd576d6-k5lb4" Nov 5 15:58:21.363185 systemd-networkd[1522]: cali071ca711233: Link UP Nov 5 15:58:21.367467 systemd-networkd[1522]: cali071ca711233: Gained carrier Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:20.870 [INFO][4069] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.016 [INFO][4069] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0 calico-apiserver-666f597f8b- calico-apiserver 091e450e-4da8-4476-b3e3-4b2049f9a92c 847 0 2025-11-05 15:57:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:666f597f8b 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-666f597f8b-vgtfd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali071ca711233 [] [] }} ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-vgtfd" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.016 [INFO][4069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-vgtfd" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.191 [INFO][4094] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" HandleID="k8s-pod-network.2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Workload="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.192 [INFO][4094] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" HandleID="k8s-pod-network.2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Workload="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000347730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-666f597f8b-vgtfd", "timestamp":"2025-11-05 15:58:21.191006846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.192 [INFO][4094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.192 [INFO][4094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.192 [INFO][4094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.215 [INFO][4094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" host="localhost" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.244 [INFO][4094] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.271 [INFO][4094] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.283 [INFO][4094] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.292 [INFO][4094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.292 [INFO][4094] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" host="localhost" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.294 [INFO][4094] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7 Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.316 [INFO][4094] ipam/ipam.go 
1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" host="localhost" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.331 [INFO][4094] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" host="localhost" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.331 [INFO][4094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" host="localhost" Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.331 [INFO][4094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:58:21.391470 containerd[1625]: 2025-11-05 15:58:21.331 [INFO][4094] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" HandleID="k8s-pod-network.2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Workload="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" Nov 5 15:58:21.392674 containerd[1625]: 2025-11-05 15:58:21.341 [INFO][4069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-vgtfd" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0", GenerateName:"calico-apiserver-666f597f8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"091e450e-4da8-4476-b3e3-4b2049f9a92c", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, 
time.November, 5, 15, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666f597f8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-666f597f8b-vgtfd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali071ca711233", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:21.392674 containerd[1625]: 2025-11-05 15:58:21.341 [INFO][4069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-vgtfd" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" Nov 5 15:58:21.392674 containerd[1625]: 2025-11-05 15:58:21.341 [INFO][4069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali071ca711233 ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-vgtfd" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" Nov 5 15:58:21.392674 containerd[1625]: 2025-11-05 15:58:21.369 [INFO][4069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-vgtfd" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" Nov 5 15:58:21.392674 containerd[1625]: 2025-11-05 15:58:21.371 [INFO][4069] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-vgtfd" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0", GenerateName:"calico-apiserver-666f597f8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"091e450e-4da8-4476-b3e3-4b2049f9a92c", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666f597f8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7", Pod:"calico-apiserver-666f597f8b-vgtfd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali071ca711233", MAC:"66:8e:e8:30:01:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:21.392674 containerd[1625]: 2025-11-05 15:58:21.386 [INFO][4069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-vgtfd" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--vgtfd-eth0" Nov 5 15:58:21.425353 kubelet[2823]: E1105 15:58:21.425252 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:21.445714 kubelet[2823]: E1105 15:58:21.445649 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:21.446540 containerd[1625]: time="2025-11-05T15:58:21.446172242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wctmv,Uid:60a572ee-9f85-42df-8906-eb4bf9d5e5c1,Namespace:kube-system,Attempt:0,}" Nov 5 15:58:21.450431 containerd[1625]: time="2025-11-05T15:58:21.450375340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4gzxq,Uid:def47882-ae7c-4469-bdea-ed04b63c4c12,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:21.591692 containerd[1625]: time="2025-11-05T15:58:21.591538515Z" level=info msg="connecting to shim 2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7" address="unix:///run/containerd/s/079363c7959d4a2b08008965d1df59954301087814c9ad5d9cbb6454bb8a0f6c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:58:21.604404 containerd[1625]: time="2025-11-05T15:58:21.604337634Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-544bd576d6-k5lb4,Uid:5542c344-4a27-4a33-b28f-ee6d288fca27,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:21.629253 systemd-networkd[1522]: cali22be0e62d73: Link UP Nov 5 15:58:21.629593 systemd[1]: Started cri-containerd-2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7.scope - libcontainer container 2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7. Nov 5 15:58:21.630532 systemd-networkd[1522]: cali22be0e62d73: Gained carrier Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.501 [INFO][4125] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.519 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--wctmv-eth0 coredns-66bc5c9577- kube-system 60a572ee-9f85-42df-8906-eb4bf9d5e5c1 844 0 2025-11-05 15:57:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-wctmv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali22be0e62d73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Namespace="kube-system" Pod="coredns-66bc5c9577-wctmv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wctmv-" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.519 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Namespace="kube-system" Pod="coredns-66bc5c9577-wctmv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.558 
[INFO][4148] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" HandleID="k8s-pod-network.1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Workload="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.559 [INFO][4148] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" HandleID="k8s-pod-network.1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Workload="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf290), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-wctmv", "timestamp":"2025-11-05 15:58:21.558869405 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.559 [INFO][4148] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.559 [INFO][4148] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.559 [INFO][4148] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.566 [INFO][4148] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" host="localhost" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.573 [INFO][4148] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.582 [INFO][4148] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.584 [INFO][4148] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.586 [INFO][4148] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.587 [INFO][4148] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" host="localhost" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.588 [INFO][4148] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.595 [INFO][4148] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" host="localhost" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.610 [INFO][4148] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" host="localhost" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.610 [INFO][4148] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" host="localhost" Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.610 [INFO][4148] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:58:21.651043 containerd[1625]: 2025-11-05 15:58:21.610 [INFO][4148] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" HandleID="k8s-pod-network.1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Workload="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" Nov 5 15:58:21.651954 containerd[1625]: 2025-11-05 15:58:21.626 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Namespace="kube-system" Pod="coredns-66bc5c9577-wctmv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wctmv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"60a572ee-9f85-42df-8906-eb4bf9d5e5c1", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-wctmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22be0e62d73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:21.651954 containerd[1625]: 2025-11-05 15:58:21.626 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Namespace="kube-system" Pod="coredns-66bc5c9577-wctmv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" Nov 5 15:58:21.651954 containerd[1625]: 2025-11-05 15:58:21.626 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22be0e62d73 ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Namespace="kube-system" Pod="coredns-66bc5c9577-wctmv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" Nov 5 
15:58:21.651954 containerd[1625]: 2025-11-05 15:58:21.631 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Namespace="kube-system" Pod="coredns-66bc5c9577-wctmv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" Nov 5 15:58:21.652178 containerd[1625]: 2025-11-05 15:58:21.632 [INFO][4125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Namespace="kube-system" Pod="coredns-66bc5c9577-wctmv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wctmv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"60a572ee-9f85-42df-8906-eb4bf9d5e5c1", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d", Pod:"coredns-66bc5c9577-wctmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22be0e62d73", 
MAC:"06:c7:93:28:c9:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:21.652178 containerd[1625]: 2025-11-05 15:58:21.644 [INFO][4125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" Namespace="kube-system" Pod="coredns-66bc5c9577-wctmv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wctmv-eth0" Nov 5 15:58:21.660742 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:58:21.678585 containerd[1625]: time="2025-11-05T15:58:21.678141054Z" level=info msg="connecting to shim 1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d" address="unix:///run/containerd/s/5aa7e32d91a041cd23447711f78feb637be6b679ed69ed2728f281123615863b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:58:21.724058 containerd[1625]: time="2025-11-05T15:58:21.724007985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666f597f8b-vgtfd,Uid:091e450e-4da8-4476-b3e3-4b2049f9a92c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"2cfce9733f89b480a4333641dfb93f1a59a17b2e07497b778d4e4e401a575cf7\"" Nov 5 15:58:21.724943 systemd-networkd[1522]: calib372a32c973: Link UP Nov 5 15:58:21.726025 systemd-networkd[1522]: calib372a32c973: Gained carrier Nov 5 15:58:21.727616 containerd[1625]: time="2025-11-05T15:58:21.727533627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:58:21.748867 systemd[1]: Started cri-containerd-1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d.scope - libcontainer container 1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d. Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.498 [INFO][4119] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.519 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--4gzxq-eth0 goldmane-7c778bb748- calico-system def47882-ae7c-4469-bdea-ed04b63c4c12 851 0 2025-11-05 15:57:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-4gzxq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib372a32c973 [] [] }} ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Namespace="calico-system" Pod="goldmane-7c778bb748-4gzxq" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4gzxq-" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.520 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Namespace="calico-system" Pod="goldmane-7c778bb748-4gzxq" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" Nov 5 15:58:21.755620 containerd[1625]: 
2025-11-05 15:58:21.565 [INFO][4150] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" HandleID="k8s-pod-network.7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Workload="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.565 [INFO][4150] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" HandleID="k8s-pod-network.7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Workload="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-4gzxq", "timestamp":"2025-11-05 15:58:21.565432508 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.566 [INFO][4150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.610 [INFO][4150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.612 [INFO][4150] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.667 [INFO][4150] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" host="localhost" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.675 [INFO][4150] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.683 [INFO][4150] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.686 [INFO][4150] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.689 [INFO][4150] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.689 [INFO][4150] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" host="localhost" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.691 [INFO][4150] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.699 [INFO][4150] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" host="localhost" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.713 [INFO][4150] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" host="localhost" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.713 [INFO][4150] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" host="localhost" Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.713 [INFO][4150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:58:21.755620 containerd[1625]: 2025-11-05 15:58:21.713 [INFO][4150] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" HandleID="k8s-pod-network.7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Workload="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" Nov 5 15:58:21.758753 containerd[1625]: 2025-11-05 15:58:21.720 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Namespace="calico-system" Pod="goldmane-7c778bb748-4gzxq" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--4gzxq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"def47882-ae7c-4469-bdea-ed04b63c4c12", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-4gzxq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib372a32c973", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:21.758753 containerd[1625]: 2025-11-05 15:58:21.721 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Namespace="calico-system" Pod="goldmane-7c778bb748-4gzxq" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" Nov 5 15:58:21.758753 containerd[1625]: 2025-11-05 15:58:21.721 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib372a32c973 ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Namespace="calico-system" Pod="goldmane-7c778bb748-4gzxq" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" Nov 5 15:58:21.758753 containerd[1625]: 2025-11-05 15:58:21.727 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Namespace="calico-system" Pod="goldmane-7c778bb748-4gzxq" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" Nov 5 15:58:21.758753 containerd[1625]: 2025-11-05 15:58:21.730 [INFO][4119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Namespace="calico-system" Pod="goldmane-7c778bb748-4gzxq" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--4gzxq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"def47882-ae7c-4469-bdea-ed04b63c4c12", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f", Pod:"goldmane-7c778bb748-4gzxq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib372a32c973", MAC:"1e:e1:18:cb:be:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:21.758753 containerd[1625]: 2025-11-05 15:58:21.748 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" Namespace="calico-system" Pod="goldmane-7c778bb748-4gzxq" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4gzxq-eth0" Nov 5 15:58:21.771553 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: 
No such device or address Nov 5 15:58:21.793870 containerd[1625]: time="2025-11-05T15:58:21.793782259Z" level=info msg="connecting to shim 7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f" address="unix:///run/containerd/s/e7dea2f3a4a794da3c83911909f7933bbfc5cb72cd37925544af2f55c363adfe" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:58:21.824754 systemd[1]: Started cri-containerd-7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f.scope - libcontainer container 7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f. Nov 5 15:58:21.825766 systemd-networkd[1522]: cali06820400c4c: Link UP Nov 5 15:58:21.826034 systemd-networkd[1522]: cali06820400c4c: Gained carrier Nov 5 15:58:21.830958 containerd[1625]: time="2025-11-05T15:58:21.830883535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wctmv,Uid:60a572ee-9f85-42df-8906-eb4bf9d5e5c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d\"" Nov 5 15:58:21.839267 kubelet[2823]: E1105 15:58:21.833810 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:21.846538 containerd[1625]: time="2025-11-05T15:58:21.846294098Z" level=info msg="CreateContainer within sandbox \"1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:58:21.849219 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.664 [INFO][4191] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.676 [INFO][4191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--544bd576d6--k5lb4-eth0 whisker-544bd576d6- calico-system 5542c344-4a27-4a33-b28f-ee6d288fca27 944 0 2025-11-05 15:58:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:544bd576d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-544bd576d6-k5lb4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali06820400c4c [] [] }} ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Namespace="calico-system" Pod="whisker-544bd576d6-k5lb4" WorkloadEndpoint="localhost-k8s-whisker--544bd576d6--k5lb4-" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.676 [INFO][4191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Namespace="calico-system" Pod="whisker-544bd576d6-k5lb4" WorkloadEndpoint="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.736 [INFO][4233] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" HandleID="k8s-pod-network.6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Workload="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.736 [INFO][4233] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" HandleID="k8s-pod-network.6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Workload="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-544bd576d6-k5lb4", "timestamp":"2025-11-05 15:58:21.736158072 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.736 [INFO][4233] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.736 [INFO][4233] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.736 [INFO][4233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.769 [INFO][4233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" host="localhost" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.777 [INFO][4233] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.785 [INFO][4233] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.787 [INFO][4233] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.791 [INFO][4233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.793 [INFO][4233] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" host="localhost" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.797 [INFO][4233] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198 
Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.803 [INFO][4233] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" host="localhost" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.813 [INFO][4233] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" host="localhost" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.813 [INFO][4233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" host="localhost" Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.813 [INFO][4233] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:58:21.853273 containerd[1625]: 2025-11-05 15:58:21.813 [INFO][4233] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" HandleID="k8s-pod-network.6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Workload="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" Nov 5 15:58:21.854295 containerd[1625]: 2025-11-05 15:58:21.820 [INFO][4191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Namespace="calico-system" Pod="whisker-544bd576d6-k5lb4" WorkloadEndpoint="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--544bd576d6--k5lb4-eth0", GenerateName:"whisker-544bd576d6-", Namespace:"calico-system", SelfLink:"", UID:"5542c344-4a27-4a33-b28f-ee6d288fca27", ResourceVersion:"944", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 5, 15, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544bd576d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-544bd576d6-k5lb4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali06820400c4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:21.854295 containerd[1625]: 2025-11-05 15:58:21.821 [INFO][4191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Namespace="calico-system" Pod="whisker-544bd576d6-k5lb4" WorkloadEndpoint="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" Nov 5 15:58:21.854295 containerd[1625]: 2025-11-05 15:58:21.821 [INFO][4191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06820400c4c ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Namespace="calico-system" Pod="whisker-544bd576d6-k5lb4" WorkloadEndpoint="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" Nov 5 15:58:21.854295 containerd[1625]: 2025-11-05 15:58:21.825 [INFO][4191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Namespace="calico-system" 
Pod="whisker-544bd576d6-k5lb4" WorkloadEndpoint="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" Nov 5 15:58:21.854295 containerd[1625]: 2025-11-05 15:58:21.826 [INFO][4191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Namespace="calico-system" Pod="whisker-544bd576d6-k5lb4" WorkloadEndpoint="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--544bd576d6--k5lb4-eth0", GenerateName:"whisker-544bd576d6-", Namespace:"calico-system", SelfLink:"", UID:"5542c344-4a27-4a33-b28f-ee6d288fca27", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544bd576d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198", Pod:"whisker-544bd576d6-k5lb4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali06820400c4c", MAC:"7a:b7:80:1f:29:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:21.854295 containerd[1625]: 2025-11-05 15:58:21.845 
[INFO][4191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" Namespace="calico-system" Pod="whisker-544bd576d6-k5lb4" WorkloadEndpoint="localhost-k8s-whisker--544bd576d6--k5lb4-eth0" Nov 5 15:58:21.867385 containerd[1625]: time="2025-11-05T15:58:21.867337834Z" level=info msg="Container e4c89694c46c6b67651dd1eeac70623008b4e7a768c5a8c82bb3e9b2d75809d3: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:58:21.881579 containerd[1625]: time="2025-11-05T15:58:21.881411260Z" level=info msg="CreateContainer within sandbox \"1382f993993619f843182541d4d2542e320dcff53ab481d397c7ebb3ba24da2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4c89694c46c6b67651dd1eeac70623008b4e7a768c5a8c82bb3e9b2d75809d3\"" Nov 5 15:58:21.882656 containerd[1625]: time="2025-11-05T15:58:21.882630782Z" level=info msg="StartContainer for \"e4c89694c46c6b67651dd1eeac70623008b4e7a768c5a8c82bb3e9b2d75809d3\"" Nov 5 15:58:21.884205 containerd[1625]: time="2025-11-05T15:58:21.884151671Z" level=info msg="connecting to shim e4c89694c46c6b67651dd1eeac70623008b4e7a768c5a8c82bb3e9b2d75809d3" address="unix:///run/containerd/s/5aa7e32d91a041cd23447711f78feb637be6b679ed69ed2728f281123615863b" protocol=ttrpc version=3 Nov 5 15:58:21.887032 containerd[1625]: time="2025-11-05T15:58:21.886930375Z" level=info msg="connecting to shim 6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198" address="unix:///run/containerd/s/bd5e273b5b6bf3a32b276e32f5a642e07afb3c7dcc7f3c3d5ccd4c3b8b320139" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:58:21.893926 containerd[1625]: time="2025-11-05T15:58:21.893890075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4gzxq,Uid:def47882-ae7c-4469-bdea-ed04b63c4c12,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e2b5ce76e704a0a555948725b1c8a4532919286ea4277b55e77e7963bee6b4f\"" Nov 5 15:58:21.928712 systemd[1]: Started 
cri-containerd-6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198.scope - libcontainer container 6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198. Nov 5 15:58:21.930980 systemd[1]: Started cri-containerd-e4c89694c46c6b67651dd1eeac70623008b4e7a768c5a8c82bb3e9b2d75809d3.scope - libcontainer container e4c89694c46c6b67651dd1eeac70623008b4e7a768c5a8c82bb3e9b2d75809d3. Nov 5 15:58:21.944708 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:58:21.980325 containerd[1625]: time="2025-11-05T15:58:21.980257664Z" level=info msg="StartContainer for \"e4c89694c46c6b67651dd1eeac70623008b4e7a768c5a8c82bb3e9b2d75809d3\" returns successfully" Nov 5 15:58:21.988380 containerd[1625]: time="2025-11-05T15:58:21.988254779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544bd576d6-k5lb4,Uid:5542c344-4a27-4a33-b28f-ee6d288fca27,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d516e5ae43edbac3f38e48981e7cbe546f0eac1904836fa7f35fce50fd31198\"" Nov 5 15:58:22.117650 containerd[1625]: time="2025-11-05T15:58:22.117586777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:22.124334 containerd[1625]: time="2025-11-05T15:58:22.124067055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:58:22.141020 containerd[1625]: time="2025-11-05T15:58:22.140864294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:58:22.143113 kubelet[2823]: E1105 15:58:22.141394 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:58:22.143113 kubelet[2823]: E1105 15:58:22.141469 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:58:22.143113 kubelet[2823]: E1105 15:58:22.142170 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-666f597f8b-vgtfd_calico-apiserver(091e450e-4da8-4476-b3e3-4b2049f9a92c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:22.143113 kubelet[2823]: E1105 15:58:22.142214 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" podUID="091e450e-4da8-4476-b3e3-4b2049f9a92c" Nov 5 15:58:22.143450 containerd[1625]: time="2025-11-05T15:58:22.143424729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:58:22.446676 kubelet[2823]: I1105 15:58:22.446576 2823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a9ea0dd2-f690-473a-aa1d-3c08114559e4" path="/var/lib/kubelet/pods/a9ea0dd2-f690-473a-aa1d-3c08114559e4/volumes" Nov 5 15:58:22.447386 kubelet[2823]: E1105 15:58:22.447351 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" podUID="091e450e-4da8-4476-b3e3-4b2049f9a92c" Nov 5 15:58:22.450433 kubelet[2823]: E1105 15:58:22.450068 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:22.475002 kubelet[2823]: I1105 15:58:22.474916 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wctmv" podStartSLOduration=49.474894586 podStartE2EDuration="49.474894586s" podCreationTimestamp="2025-11-05 15:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:58:22.474862605 +0000 UTC m=+56.245016874" watchObservedRunningTime="2025-11-05 15:58:22.474894586 +0000 UTC m=+56.245048845" Nov 5 15:58:22.503338 containerd[1625]: time="2025-11-05T15:58:22.503260098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:22.504740 containerd[1625]: time="2025-11-05T15:58:22.504705411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:58:22.504866 containerd[1625]: time="2025-11-05T15:58:22.504827404Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:58:22.505120 kubelet[2823]: E1105 15:58:22.505082 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:58:22.505214 kubelet[2823]: E1105 15:58:22.505130 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:58:22.505772 kubelet[2823]: E1105 15:58:22.505296 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4gzxq_calico-system(def47882-ae7c-4469-bdea-ed04b63c4c12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:22.505870 containerd[1625]: time="2025-11-05T15:58:22.505536450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:58:22.506006 kubelet[2823]: E1105 15:58:22.505965 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4gzxq" podUID="def47882-ae7c-4469-bdea-ed04b63c4c12" Nov 5 15:58:22.533174 systemd-networkd[1522]: vxlan.calico: Link UP Nov 5 15:58:22.533187 systemd-networkd[1522]: vxlan.calico: Gained carrier Nov 5 15:58:22.754581 systemd-networkd[1522]: cali071ca711233: Gained IPv6LL Nov 5 15:58:22.882446 systemd-networkd[1522]: cali22be0e62d73: Gained IPv6LL Nov 5 15:58:22.955231 containerd[1625]: time="2025-11-05T15:58:22.955156330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:23.293439 containerd[1625]: time="2025-11-05T15:58:23.293327863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:58:23.293668 containerd[1625]: time="2025-11-05T15:58:23.293362719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:58:23.294520 kubelet[2823]: E1105 15:58:23.293784 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:58:23.294520 kubelet[2823]: E1105 15:58:23.293839 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:58:23.294520 kubelet[2823]: E1105 15:58:23.293918 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-544bd576d6-k5lb4_calico-system(5542c344-4a27-4a33-b28f-ee6d288fca27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:23.295271 containerd[1625]: time="2025-11-05T15:58:23.294756863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:58:23.330532 systemd-networkd[1522]: calib372a32c973: Gained IPv6LL Nov 5 15:58:23.331534 systemd-networkd[1522]: cali06820400c4c: Gained IPv6LL Nov 5 15:58:23.453385 kubelet[2823]: E1105 15:58:23.453054 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:23.454567 kubelet[2823]: E1105 15:58:23.454523 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" podUID="091e450e-4da8-4476-b3e3-4b2049f9a92c" Nov 5 15:58:23.454567 kubelet[2823]: E1105 15:58:23.454534 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4gzxq" podUID="def47882-ae7c-4469-bdea-ed04b63c4c12" Nov 5 15:58:23.664956 containerd[1625]: time="2025-11-05T15:58:23.664894200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:23.682733 containerd[1625]: time="2025-11-05T15:58:23.682177472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:58:23.682733 containerd[1625]: time="2025-11-05T15:58:23.682226786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:58:23.683351 kubelet[2823]: E1105 15:58:23.683260 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:58:23.683430 kubelet[2823]: E1105 15:58:23.683369 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:58:23.683596 kubelet[2823]: E1105 15:58:23.683457 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-544bd576d6-k5lb4_calico-system(5542c344-4a27-4a33-b28f-ee6d288fca27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:23.683596 kubelet[2823]: E1105 15:58:23.683516 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544bd576d6-k5lb4" podUID="5542c344-4a27-4a33-b28f-ee6d288fca27" Nov 5 15:58:23.971583 systemd-networkd[1522]: vxlan.calico: Gained IPv6LL Nov 5 15:58:24.454292 kubelet[2823]: E1105 15:58:24.454246 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:24.455543 kubelet[2823]: E1105 15:58:24.455504 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544bd576d6-k5lb4" podUID="5542c344-4a27-4a33-b28f-ee6d288fca27" Nov 5 15:58:28.155288 systemd[1]: Started sshd@7-10.0.0.107:22-10.0.0.1:39012.service - OpenSSH per-connection server daemon (10.0.0.1:39012). Nov 5 15:58:28.248757 sshd[4649]: Accepted publickey for core from 10.0.0.1 port 39012 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:58:28.251938 sshd-session[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:58:28.257512 systemd-logind[1592]: New session 8 of user core. Nov 5 15:58:28.264517 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 15:58:28.413617 sshd[4652]: Connection closed by 10.0.0.1 port 39012 Nov 5 15:58:28.413986 sshd-session[4649]: pam_unix(sshd:session): session closed for user core Nov 5 15:58:28.417731 systemd[1]: sshd@7-10.0.0.107:22-10.0.0.1:39012.service: Deactivated successfully. Nov 5 15:58:28.420171 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:58:28.421905 systemd-logind[1592]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:58:28.423661 systemd-logind[1592]: Removed session 8. 
Nov 5 15:58:32.451179 containerd[1625]: time="2025-11-05T15:58:32.451120943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cr9h8,Uid:6636406c-76cb-4ddc-8f4d-b82da1f33a92,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:32.564858 systemd-networkd[1522]: cali66cff821c65: Link UP Nov 5 15:58:32.565684 systemd-networkd[1522]: cali66cff821c65: Gained carrier Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.486 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cr9h8-eth0 csi-node-driver- calico-system 6636406c-76cb-4ddc-8f4d-b82da1f33a92 734 0 2025-11-05 15:57:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cr9h8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali66cff821c65 [] [] }} ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" Namespace="calico-system" Pod="csi-node-driver-cr9h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cr9h8-" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.486 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" Namespace="calico-system" Pod="csi-node-driver-cr9h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cr9h8-eth0" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.518 [INFO][4680] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" HandleID="k8s-pod-network.ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" 
Workload="localhost-k8s-csi--node--driver--cr9h8-eth0" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.519 [INFO][4680] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" HandleID="k8s-pod-network.ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" Workload="localhost-k8s-csi--node--driver--cr9h8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cr9h8", "timestamp":"2025-11-05 15:58:32.518918285 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.519 [INFO][4680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.519 [INFO][4680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.519 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.527 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" host="localhost" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.532 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.538 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.540 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.543 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.543 [INFO][4680] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" host="localhost" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.545 [INFO][4680] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0 Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.551 [INFO][4680] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" host="localhost" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.558 [INFO][4680] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" host="localhost" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.558 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" host="localhost" Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.558 [INFO][4680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:58:32.586443 containerd[1625]: 2025-11-05 15:58:32.558 [INFO][4680] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" HandleID="k8s-pod-network.ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" Workload="localhost-k8s-csi--node--driver--cr9h8-eth0" Nov 5 15:58:32.587083 containerd[1625]: 2025-11-05 15:58:32.561 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" Namespace="calico-system" Pod="csi-node-driver-cr9h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cr9h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cr9h8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6636406c-76cb-4ddc-8f4d-b82da1f33a92", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cr9h8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66cff821c65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:32.587083 containerd[1625]: 2025-11-05 15:58:32.561 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" Namespace="calico-system" Pod="csi-node-driver-cr9h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cr9h8-eth0" Nov 5 15:58:32.587083 containerd[1625]: 2025-11-05 15:58:32.561 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66cff821c65 ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" Namespace="calico-system" Pod="csi-node-driver-cr9h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cr9h8-eth0" Nov 5 15:58:32.587083 containerd[1625]: 2025-11-05 15:58:32.567 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" Namespace="calico-system" Pod="csi-node-driver-cr9h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cr9h8-eth0" Nov 5 15:58:32.587083 containerd[1625]: 2025-11-05 15:58:32.567 [INFO][4667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" 
Namespace="calico-system" Pod="csi-node-driver-cr9h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cr9h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cr9h8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6636406c-76cb-4ddc-8f4d-b82da1f33a92", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0", Pod:"csi-node-driver-cr9h8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66cff821c65", MAC:"72:c4:d5:45:cc:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:32.587083 containerd[1625]: 2025-11-05 15:58:32.579 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" Namespace="calico-system" Pod="csi-node-driver-cr9h8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cr9h8-eth0" Nov 5 15:58:32.629375 containerd[1625]: time="2025-11-05T15:58:32.629283382Z" level=info msg="connecting to shim ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0" address="unix:///run/containerd/s/e30c5d293aa306e72cbdae434427aeb79d44ff79394da33ca4f573dc301f74a5" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:58:32.681508 systemd[1]: Started cri-containerd-ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0.scope - libcontainer container ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0. Nov 5 15:58:32.695952 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:58:32.739063 containerd[1625]: time="2025-11-05T15:58:32.738894523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cr9h8,Uid:6636406c-76cb-4ddc-8f4d-b82da1f33a92,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea5f481b1dd9e81603191e5f7450819f5f009368ad8863b4f04f3ce7bfaa63f0\"" Nov 5 15:58:32.740984 containerd[1625]: time="2025-11-05T15:58:32.740929727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:58:33.123948 containerd[1625]: time="2025-11-05T15:58:33.123787472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:33.177185 containerd[1625]: time="2025-11-05T15:58:33.177067383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:58:33.177185 containerd[1625]: time="2025-11-05T15:58:33.177163245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:58:33.177623 kubelet[2823]: E1105 15:58:33.177534 2823 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:58:33.177623 kubelet[2823]: E1105 15:58:33.177624 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:58:33.178197 kubelet[2823]: E1105 15:58:33.177729 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:33.179006 containerd[1625]: time="2025-11-05T15:58:33.178936910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:58:33.430007 systemd[1]: Started sshd@8-10.0.0.107:22-10.0.0.1:56808.service - OpenSSH per-connection server daemon (10.0.0.1:56808). Nov 5 15:58:33.497289 sshd[4751]: Accepted publickey for core from 10.0.0.1 port 56808 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:58:33.498944 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:58:33.504853 systemd-logind[1592]: New session 9 of user core. 
Nov 5 15:58:33.514207 containerd[1625]: time="2025-11-05T15:58:33.514120110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666f597f8b-qsk25,Uid:36f63588-e88e-4e5e-be35-3d453ebfbecf,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:58:33.516629 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 15:58:33.540738 containerd[1625]: time="2025-11-05T15:58:33.540658531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:33.543904 containerd[1625]: time="2025-11-05T15:58:33.542932890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:58:33.543904 containerd[1625]: time="2025-11-05T15:58:33.543057016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:58:33.544150 kubelet[2823]: E1105 15:58:33.543340 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:58:33.544150 kubelet[2823]: E1105 15:58:33.543418 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:58:33.544150 kubelet[2823]: E1105 15:58:33.543541 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:33.544287 kubelet[2823]: E1105 15:58:33.543663 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:58:33.665282 systemd-networkd[1522]: calia6603d83480: Link UP Nov 5 15:58:33.666727 systemd-networkd[1522]: calia6603d83480: Gained carrier Nov 5 15:58:33.679483 sshd[4766]: Connection closed by 10.0.0.1 port 56808 Nov 5 15:58:33.679981 sshd-session[4751]: pam_unix(sshd:session): session closed for user core Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.565 [INFO][4755] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0 calico-apiserver-666f597f8b- calico-apiserver 36f63588-e88e-4e5e-be35-3d453ebfbecf 850 0 2025-11-05 15:57:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:666f597f8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-666f597f8b-qsk25 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia6603d83480 [] [] }} ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-qsk25" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--qsk25-" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.565 [INFO][4755] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-qsk25" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.597 [INFO][4771] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" HandleID="k8s-pod-network.289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Workload="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.597 [INFO][4771] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" HandleID="k8s-pod-network.289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Workload="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6e0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-666f597f8b-qsk25", "timestamp":"2025-11-05 15:58:33.597062829 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.597 [INFO][4771] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.597 [INFO][4771] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.597 [INFO][4771] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.605 [INFO][4771] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" host="localhost" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.616 [INFO][4771] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.624 [INFO][4771] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.628 [INFO][4771] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.631 [INFO][4771] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.631 [INFO][4771] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" host="localhost" Nov 5 15:58:33.687814 
containerd[1625]: 2025-11-05 15:58:33.634 [INFO][4771] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494 Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.639 [INFO][4771] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" host="localhost" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.654 [INFO][4771] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" host="localhost" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.654 [INFO][4771] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" host="localhost" Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.654 [INFO][4771] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:58:33.687814 containerd[1625]: 2025-11-05 15:58:33.654 [INFO][4771] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" HandleID="k8s-pod-network.289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Workload="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" Nov 5 15:58:33.689349 containerd[1625]: 2025-11-05 15:58:33.658 [INFO][4755] cni-plugin/k8s.go 418: Populated endpoint ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-qsk25" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0", GenerateName:"calico-apiserver-666f597f8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"36f63588-e88e-4e5e-be35-3d453ebfbecf", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666f597f8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-666f597f8b-qsk25", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6603d83480", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:33.689349 containerd[1625]: 2025-11-05 15:58:33.659 [INFO][4755] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-qsk25" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" Nov 5 15:58:33.689349 containerd[1625]: 2025-11-05 15:58:33.659 [INFO][4755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6603d83480 ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-qsk25" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" Nov 5 15:58:33.689349 containerd[1625]: 2025-11-05 15:58:33.667 [INFO][4755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-qsk25" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" Nov 5 15:58:33.689349 containerd[1625]: 2025-11-05 15:58:33.667 [INFO][4755] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-qsk25" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0", 
GenerateName:"calico-apiserver-666f597f8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"36f63588-e88e-4e5e-be35-3d453ebfbecf", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666f597f8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494", Pod:"calico-apiserver-666f597f8b-qsk25", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia6603d83480", MAC:"ba:b4:f9:db:6a:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:33.689349 containerd[1625]: 2025-11-05 15:58:33.683 [INFO][4755] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" Namespace="calico-apiserver" Pod="calico-apiserver-666f597f8b-qsk25" WorkloadEndpoint="localhost-k8s-calico--apiserver--666f597f8b--qsk25-eth0" Nov 5 15:58:33.688531 systemd[1]: sshd@8-10.0.0.107:22-10.0.0.1:56808.service: Deactivated successfully. Nov 5 15:58:33.692204 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:58:33.693966 systemd-logind[1592]: Session 9 logged out. 
Waiting for processes to exit. Nov 5 15:58:33.695944 systemd-logind[1592]: Removed session 9. Nov 5 15:58:33.715245 containerd[1625]: time="2025-11-05T15:58:33.715167851Z" level=info msg="connecting to shim 289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494" address="unix:///run/containerd/s/f0a6fcfb88f1502db310a149e11973032e2867218e10f234c8624f201fc5fa51" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:58:33.748527 systemd[1]: Started cri-containerd-289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494.scope - libcontainer container 289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494. Nov 5 15:58:33.765043 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:58:33.802299 containerd[1625]: time="2025-11-05T15:58:33.802236080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666f597f8b-qsk25,Uid:36f63588-e88e-4e5e-be35-3d453ebfbecf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"289a09b46156e23c2658eb51e8bded1f3e266e319e321ac0e281bc12e9a9d494\"" Nov 5 15:58:33.804371 containerd[1625]: time="2025-11-05T15:58:33.804335175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:58:34.161360 containerd[1625]: time="2025-11-05T15:58:34.161179081Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:34.163520 containerd[1625]: time="2025-11-05T15:58:34.163488965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:58:34.163619 containerd[1625]: time="2025-11-05T15:58:34.163486590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" Nov 5 15:58:34.163889 kubelet[2823]: E1105 15:58:34.163823 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:58:34.163962 kubelet[2823]: E1105 15:58:34.163899 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:58:34.164137 kubelet[2823]: E1105 15:58:34.164020 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-666f597f8b-qsk25_calico-apiserver(36f63588-e88e-4e5e-be35-3d453ebfbecf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:34.164137 kubelet[2823]: E1105 15:58:34.164077 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" podUID="36f63588-e88e-4e5e-be35-3d453ebfbecf" Nov 5 15:58:34.402576 systemd-networkd[1522]: cali66cff821c65: Gained IPv6LL Nov 5 15:58:34.443941 
containerd[1625]: time="2025-11-05T15:58:34.443889663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f7df89d-9b28s,Uid:e568ac1b-7203-41c5-978d-53ea0a375013,Namespace:calico-system,Attempt:0,}" Nov 5 15:58:34.446434 kubelet[2823]: E1105 15:58:34.446231 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:34.447713 containerd[1625]: time="2025-11-05T15:58:34.447195853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7x86x,Uid:2a2a255a-1d40-4545-90a6-e6052dd9a0ae,Namespace:kube-system,Attempt:0,}" Nov 5 15:58:34.481534 kubelet[2823]: E1105 15:58:34.481439 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" podUID="36f63588-e88e-4e5e-be35-3d453ebfbecf" Nov 5 15:58:34.482187 kubelet[2823]: E1105 15:58:34.482147 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:58:34.579835 systemd-networkd[1522]: cali09533f54fcc: Link UP Nov 5 15:58:34.581418 systemd-networkd[1522]: cali09533f54fcc: Gained carrier Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.494 [INFO][4857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--7x86x-eth0 coredns-66bc5c9577- kube-system 2a2a255a-1d40-4545-90a6-e6052dd9a0ae 853 0 2025-11-05 15:57:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-7x86x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09533f54fcc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Namespace="kube-system" Pod="coredns-66bc5c9577-7x86x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7x86x-" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.494 [INFO][4857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Namespace="kube-system" Pod="coredns-66bc5c9577-7x86x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.536 [INFO][4878] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" HandleID="k8s-pod-network.1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Workload="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.536 [INFO][4878] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" HandleID="k8s-pod-network.1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Workload="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-7x86x", "timestamp":"2025-11-05 15:58:34.536344584 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.536 [INFO][4878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.536 [INFO][4878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.536 [INFO][4878] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.545 [INFO][4878] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" host="localhost" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.550 [INFO][4878] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.554 [INFO][4878] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.556 [INFO][4878] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.558 [INFO][4878] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.559 [INFO][4878] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" host="localhost" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.560 [INFO][4878] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.568 [INFO][4878] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" host="localhost" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.573 [INFO][4878] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" host="localhost" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.573 [INFO][4878] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" host="localhost" Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.573 [INFO][4878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:58:34.599626 containerd[1625]: 2025-11-05 15:58:34.573 [INFO][4878] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" HandleID="k8s-pod-network.1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Workload="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" Nov 5 15:58:34.600838 containerd[1625]: 2025-11-05 15:58:34.576 [INFO][4857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Namespace="kube-system" Pod="coredns-66bc5c9577-7x86x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7x86x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2a2a255a-1d40-4545-90a6-e6052dd9a0ae", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-7x86x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09533f54fcc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:34.600838 containerd[1625]: 2025-11-05 15:58:34.576 [INFO][4857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Namespace="kube-system" Pod="coredns-66bc5c9577-7x86x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" Nov 5 15:58:34.600838 containerd[1625]: 2025-11-05 15:58:34.576 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09533f54fcc ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Namespace="kube-system" Pod="coredns-66bc5c9577-7x86x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" Nov 5 
15:58:34.600838 containerd[1625]: 2025-11-05 15:58:34.581 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Namespace="kube-system" Pod="coredns-66bc5c9577-7x86x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" Nov 5 15:58:34.601029 containerd[1625]: 2025-11-05 15:58:34.582 [INFO][4857] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Namespace="kube-system" Pod="coredns-66bc5c9577-7x86x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--7x86x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2a2a255a-1d40-4545-90a6-e6052dd9a0ae", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d", Pod:"coredns-66bc5c9577-7x86x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09533f54fcc", 
MAC:"42:06:83:d0:6d:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:34.601029 containerd[1625]: 2025-11-05 15:58:34.596 [INFO][4857] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" Namespace="kube-system" Pod="coredns-66bc5c9577-7x86x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--7x86x-eth0" Nov 5 15:58:34.633185 containerd[1625]: time="2025-11-05T15:58:34.633132468Z" level=info msg="connecting to shim 1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d" address="unix:///run/containerd/s/ce9e50b381b2a6b4659bb5d629fc55973e8da97fd9da9d90ca9b955e9fb091e2" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:58:34.663532 systemd[1]: Started cri-containerd-1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d.scope - libcontainer container 1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d. 
Nov 5 15:58:34.680261 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:58:34.687041 systemd-networkd[1522]: cali117b8b5e6b8: Link UP Nov 5 15:58:34.688173 systemd-networkd[1522]: cali117b8b5e6b8: Gained carrier Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.511 [INFO][4851] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0 calico-kube-controllers-94f7df89d- calico-system e568ac1b-7203-41c5-978d-53ea0a375013 842 0 2025-11-05 15:57:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:94f7df89d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-94f7df89d-9b28s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali117b8b5e6b8 [] [] }} ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Namespace="calico-system" Pod="calico-kube-controllers-94f7df89d-9b28s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.511 [INFO][4851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Namespace="calico-system" Pod="calico-kube-controllers-94f7df89d-9b28s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.550 [INFO][4888] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" 
HandleID="k8s-pod-network.1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Workload="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.550 [INFO][4888] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" HandleID="k8s-pod-network.1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Workload="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-94f7df89d-9b28s", "timestamp":"2025-11-05 15:58:34.550699091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.551 [INFO][4888] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.573 [INFO][4888] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.574 [INFO][4888] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.647 [INFO][4888] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" host="localhost" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.654 [INFO][4888] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.661 [INFO][4888] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.663 [INFO][4888] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.665 [INFO][4888] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.666 [INFO][4888] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" host="localhost" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.667 [INFO][4888] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4 Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.673 [INFO][4888] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" host="localhost" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.680 [INFO][4888] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" host="localhost" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.680 [INFO][4888] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" host="localhost" Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.680 [INFO][4888] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:58:34.712715 containerd[1625]: 2025-11-05 15:58:34.680 [INFO][4888] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" HandleID="k8s-pod-network.1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Workload="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" Nov 5 15:58:34.714299 containerd[1625]: 2025-11-05 15:58:34.683 [INFO][4851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Namespace="calico-system" Pod="calico-kube-controllers-94f7df89d-9b28s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0", GenerateName:"calico-kube-controllers-94f7df89d-", Namespace:"calico-system", SelfLink:"", UID:"e568ac1b-7203-41c5-978d-53ea0a375013", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94f7df89d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-94f7df89d-9b28s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali117b8b5e6b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:34.714299 containerd[1625]: 2025-11-05 15:58:34.684 [INFO][4851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Namespace="calico-system" Pod="calico-kube-controllers-94f7df89d-9b28s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" Nov 5 15:58:34.714299 containerd[1625]: 2025-11-05 15:58:34.684 [INFO][4851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali117b8b5e6b8 ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Namespace="calico-system" Pod="calico-kube-controllers-94f7df89d-9b28s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" Nov 5 15:58:34.714299 containerd[1625]: 2025-11-05 15:58:34.688 [INFO][4851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Namespace="calico-system" Pod="calico-kube-controllers-94f7df89d-9b28s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" Nov 5 15:58:34.714299 containerd[1625]: 2025-11-05 
15:58:34.688 [INFO][4851] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Namespace="calico-system" Pod="calico-kube-controllers-94f7df89d-9b28s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0", GenerateName:"calico-kube-controllers-94f7df89d-", Namespace:"calico-system", SelfLink:"", UID:"e568ac1b-7203-41c5-978d-53ea0a375013", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94f7df89d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4", Pod:"calico-kube-controllers-94f7df89d-9b28s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali117b8b5e6b8", MAC:"4e:e4:b9:73:76:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:58:34.714299 containerd[1625]: 2025-11-05 
15:58:34.701 [INFO][4851] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" Namespace="calico-system" Pod="calico-kube-controllers-94f7df89d-9b28s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f7df89d--9b28s-eth0" Nov 5 15:58:34.720636 containerd[1625]: time="2025-11-05T15:58:34.720579692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7x86x,Uid:2a2a255a-1d40-4545-90a6-e6052dd9a0ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d\"" Nov 5 15:58:34.721941 kubelet[2823]: E1105 15:58:34.721582 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:34.727035 containerd[1625]: time="2025-11-05T15:58:34.726986019Z" level=info msg="CreateContainer within sandbox \"1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:58:34.745400 containerd[1625]: time="2025-11-05T15:58:34.744902393Z" level=info msg="Container 4c9405ae4e61526a601880151ebd05cee9cbc863a2f0c70b6c96cacb737fd7a3: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:58:34.748349 containerd[1625]: time="2025-11-05T15:58:34.748114714Z" level=info msg="connecting to shim 1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4" address="unix:///run/containerd/s/231a2271f3025b2c75f3d9cd1d81fd72a71c4f9d518d16f6cf739e7660a92d1c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:58:34.752543 containerd[1625]: time="2025-11-05T15:58:34.752517229Z" level=info msg="CreateContainer within sandbox \"1cce26ca08fa8a286595b651c788450ba0302cb1d4f53f2060d597ef822ee50d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"4c9405ae4e61526a601880151ebd05cee9cbc863a2f0c70b6c96cacb737fd7a3\"" Nov 5 15:58:34.753297 containerd[1625]: time="2025-11-05T15:58:34.753272295Z" level=info msg="StartContainer for \"4c9405ae4e61526a601880151ebd05cee9cbc863a2f0c70b6c96cacb737fd7a3\"" Nov 5 15:58:34.754510 containerd[1625]: time="2025-11-05T15:58:34.754484682Z" level=info msg="connecting to shim 4c9405ae4e61526a601880151ebd05cee9cbc863a2f0c70b6c96cacb737fd7a3" address="unix:///run/containerd/s/ce9e50b381b2a6b4659bb5d629fc55973e8da97fd9da9d90ca9b955e9fb091e2" protocol=ttrpc version=3 Nov 5 15:58:34.778492 systemd[1]: Started cri-containerd-1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4.scope - libcontainer container 1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4. Nov 5 15:58:34.783184 systemd[1]: Started cri-containerd-4c9405ae4e61526a601880151ebd05cee9cbc863a2f0c70b6c96cacb737fd7a3.scope - libcontainer container 4c9405ae4e61526a601880151ebd05cee9cbc863a2f0c70b6c96cacb737fd7a3. Nov 5 15:58:34.802142 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:58:34.829263 containerd[1625]: time="2025-11-05T15:58:34.829188334Z" level=info msg="StartContainer for \"4c9405ae4e61526a601880151ebd05cee9cbc863a2f0c70b6c96cacb737fd7a3\" returns successfully" Nov 5 15:58:34.851609 containerd[1625]: time="2025-11-05T15:58:34.851540585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f7df89d-9b28s,Uid:e568ac1b-7203-41c5-978d-53ea0a375013,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d583d7e4502cbc77a6af99d5615e196ba67dae183b063a1356c80261286f3e4\"" Nov 5 15:58:34.854281 containerd[1625]: time="2025-11-05T15:58:34.854236594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:58:34.978566 systemd-networkd[1522]: calia6603d83480: Gained IPv6LL Nov 5 15:58:35.171487 containerd[1625]: time="2025-11-05T15:58:35.171418578Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:35.172860 containerd[1625]: time="2025-11-05T15:58:35.172814803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:58:35.172933 containerd[1625]: time="2025-11-05T15:58:35.172888984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:58:35.173161 kubelet[2823]: E1105 15:58:35.173108 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:58:35.173210 kubelet[2823]: E1105 15:58:35.173165 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:58:35.173286 kubelet[2823]: E1105 15:58:35.173259 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-94f7df89d-9b28s_calico-system(e568ac1b-7203-41c5-978d-53ea0a375013): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:35.173367 kubelet[2823]: E1105 15:58:35.173320 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" podUID="e568ac1b-7203-41c5-978d-53ea0a375013" Nov 5 15:58:35.488375 kubelet[2823]: E1105 15:58:35.487080 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:35.490170 kubelet[2823]: E1105 15:58:35.489319 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" podUID="e568ac1b-7203-41c5-978d-53ea0a375013" Nov 5 15:58:35.490170 kubelet[2823]: E1105 15:58:35.489604 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" podUID="36f63588-e88e-4e5e-be35-3d453ebfbecf" Nov 5 15:58:35.646922 kubelet[2823]: I1105 15:58:35.646459 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7x86x" podStartSLOduration=62.64644168 podStartE2EDuration="1m2.64644168s" podCreationTimestamp="2025-11-05 15:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:58:35.645901383 +0000 UTC m=+69.416055652" watchObservedRunningTime="2025-11-05 15:58:35.64644168 +0000 UTC m=+69.416595939" Nov 5 15:58:36.388707 systemd-networkd[1522]: cali09533f54fcc: Gained IPv6LL Nov 5 15:58:36.443167 containerd[1625]: time="2025-11-05T15:58:36.443128671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:58:36.488609 kubelet[2823]: E1105 15:58:36.488492 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:36.489016 kubelet[2823]: E1105 15:58:36.488641 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" podUID="e568ac1b-7203-41c5-978d-53ea0a375013" Nov 5 
15:58:36.662162 systemd-networkd[1522]: cali117b8b5e6b8: Gained IPv6LL Nov 5 15:58:37.053909 containerd[1625]: time="2025-11-05T15:58:37.053733854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:37.239401 containerd[1625]: time="2025-11-05T15:58:37.239266907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:58:37.239401 containerd[1625]: time="2025-11-05T15:58:37.239335115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:58:37.239724 kubelet[2823]: E1105 15:58:37.239654 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:58:37.239724 kubelet[2823]: E1105 15:58:37.239719 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:58:37.239856 kubelet[2823]: E1105 15:58:37.239818 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-666f597f8b-vgtfd_calico-apiserver(091e450e-4da8-4476-b3e3-4b2049f9a92c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:37.239895 kubelet[2823]: E1105 15:58:37.239871 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" podUID="091e450e-4da8-4476-b3e3-4b2049f9a92c" Nov 5 15:58:37.441175 containerd[1625]: time="2025-11-05T15:58:37.441130926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:58:37.768715 containerd[1625]: time="2025-11-05T15:58:37.768534959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:37.772157 containerd[1625]: time="2025-11-05T15:58:37.771915794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:58:37.772157 containerd[1625]: time="2025-11-05T15:58:37.771976831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:58:37.772445 kubelet[2823]: E1105 15:58:37.772376 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:58:37.772812 kubelet[2823]: E1105 15:58:37.772447 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:58:37.772812 kubelet[2823]: E1105 15:58:37.772556 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4gzxq_calico-system(def47882-ae7c-4469-bdea-ed04b63c4c12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:37.772812 kubelet[2823]: E1105 15:58:37.772611 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4gzxq" podUID="def47882-ae7c-4469-bdea-ed04b63c4c12" Nov 5 15:58:38.440852 kubelet[2823]: E1105 15:58:38.439795 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:38.441035 containerd[1625]: time="2025-11-05T15:58:38.440663575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:58:38.694919 systemd[1]: Started sshd@9-10.0.0.107:22-10.0.0.1:56822.service - OpenSSH per-connection server daemon 
(10.0.0.1:56822). Nov 5 15:58:38.770329 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 56822 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:58:38.775576 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:58:38.780675 systemd-logind[1592]: New session 10 of user core. Nov 5 15:58:38.796634 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:58:38.815868 containerd[1625]: time="2025-11-05T15:58:38.815785977Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:38.822318 containerd[1625]: time="2025-11-05T15:58:38.822219610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:58:38.822482 containerd[1625]: time="2025-11-05T15:58:38.822335951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:58:38.822624 kubelet[2823]: E1105 15:58:38.822524 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:58:38.822624 kubelet[2823]: E1105 15:58:38.822607 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:58:38.823160 
kubelet[2823]: E1105 15:58:38.822739 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-544bd576d6-k5lb4_calico-system(5542c344-4a27-4a33-b28f-ee6d288fca27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:38.824829 containerd[1625]: time="2025-11-05T15:58:38.824794342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:58:38.938356 sshd[5050]: Connection closed by 10.0.0.1 port 56822 Nov 5 15:58:38.940675 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Nov 5 15:58:38.950297 systemd[1]: sshd@9-10.0.0.107:22-10.0.0.1:56822.service: Deactivated successfully. Nov 5 15:58:38.953073 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:58:38.954399 systemd-logind[1592]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:58:38.956175 systemd-logind[1592]: Removed session 10. 
Nov 5 15:58:39.188423 containerd[1625]: time="2025-11-05T15:58:39.188366136Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:39.189900 containerd[1625]: time="2025-11-05T15:58:39.189841527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:58:39.189900 containerd[1625]: time="2025-11-05T15:58:39.189875472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:58:39.190143 kubelet[2823]: E1105 15:58:39.190091 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:58:39.190143 kubelet[2823]: E1105 15:58:39.190141 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:58:39.190240 kubelet[2823]: E1105 15:58:39.190214 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-544bd576d6-k5lb4_calico-system(5542c344-4a27-4a33-b28f-ee6d288fca27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:39.190282 kubelet[2823]: E1105 15:58:39.190251 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544bd576d6-k5lb4" podUID="5542c344-4a27-4a33-b28f-ee6d288fca27" Nov 5 15:58:40.440196 kubelet[2823]: E1105 15:58:40.440126 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:43.954399 systemd[1]: Started sshd@10-10.0.0.107:22-10.0.0.1:59234.service - OpenSSH per-connection server daemon (10.0.0.1:59234). Nov 5 15:58:44.022349 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 59234 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:58:44.025411 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:58:44.030608 systemd-logind[1592]: New session 11 of user core. Nov 5 15:58:44.039478 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 5 15:58:44.173101 sshd[5076]: Connection closed by 10.0.0.1 port 59234 Nov 5 15:58:44.173599 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Nov 5 15:58:44.185127 systemd[1]: sshd@10-10.0.0.107:22-10.0.0.1:59234.service: Deactivated successfully. Nov 5 15:58:44.187019 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:58:44.187834 systemd-logind[1592]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:58:44.190736 systemd[1]: Started sshd@11-10.0.0.107:22-10.0.0.1:59240.service - OpenSSH per-connection server daemon (10.0.0.1:59240). Nov 5 15:58:44.191505 systemd-logind[1592]: Removed session 11. Nov 5 15:58:44.256208 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 59240 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:58:44.258147 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:58:44.263529 systemd-logind[1592]: New session 12 of user core. Nov 5 15:58:44.275580 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:58:44.440271 kubelet[2823]: E1105 15:58:44.440151 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:44.447456 sshd[5093]: Connection closed by 10.0.0.1 port 59240 Nov 5 15:58:44.448442 sshd-session[5090]: pam_unix(sshd:session): session closed for user core Nov 5 15:58:44.460921 systemd[1]: sshd@11-10.0.0.107:22-10.0.0.1:59240.service: Deactivated successfully. Nov 5 15:58:44.466072 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:58:44.471114 systemd-logind[1592]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:58:44.474149 systemd[1]: Started sshd@12-10.0.0.107:22-10.0.0.1:59242.service - OpenSSH per-connection server daemon (10.0.0.1:59242). Nov 5 15:58:44.476792 systemd-logind[1592]: Removed session 12. 
Nov 5 15:58:44.541934 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 59242 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:58:44.543669 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:58:44.548762 systemd-logind[1592]: New session 13 of user core. Nov 5 15:58:44.558550 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:58:44.680955 sshd[5107]: Connection closed by 10.0.0.1 port 59242 Nov 5 15:58:44.681269 sshd-session[5104]: pam_unix(sshd:session): session closed for user core Nov 5 15:58:44.686265 systemd[1]: sshd@12-10.0.0.107:22-10.0.0.1:59242.service: Deactivated successfully. Nov 5 15:58:44.688428 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:58:44.689193 systemd-logind[1592]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:58:44.690879 systemd-logind[1592]: Removed session 13. Nov 5 15:58:49.698043 systemd[1]: Started sshd@13-10.0.0.107:22-10.0.0.1:59258.service - OpenSSH per-connection server daemon (10.0.0.1:59258). Nov 5 15:58:49.750777 sshd[5123]: Accepted publickey for core from 10.0.0.1 port 59258 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:58:49.752093 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:58:49.756812 systemd-logind[1592]: New session 14 of user core. Nov 5 15:58:49.770475 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:58:49.886739 sshd[5126]: Connection closed by 10.0.0.1 port 59258 Nov 5 15:58:49.887187 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Nov 5 15:58:49.892843 systemd[1]: sshd@13-10.0.0.107:22-10.0.0.1:59258.service: Deactivated successfully. Nov 5 15:58:49.894799 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:58:49.895817 systemd-logind[1592]: Session 14 logged out. Waiting for processes to exit. 
Nov 5 15:58:49.897397 systemd-logind[1592]: Removed session 14. Nov 5 15:58:50.423599 kubelet[2823]: E1105 15:58:50.423530 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:50.445554 kubelet[2823]: E1105 15:58:50.445474 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" podUID="091e450e-4da8-4476-b3e3-4b2049f9a92c" Nov 5 15:58:50.446555 containerd[1625]: time="2025-11-05T15:58:50.446260598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:58:50.539860 containerd[1625]: time="2025-11-05T15:58:50.539801594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60\" id:\"a0136b479a57d3416fb746a340e0e97795455a80efb5b2df4391aa5b1a4a13e3\" pid:5150 exited_at:{seconds:1762358330 nanos:539402147}" Nov 5 15:58:50.543259 kubelet[2823]: E1105 15:58:50.542358 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:58:50.632249 containerd[1625]: time="2025-11-05T15:58:50.632183153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60\" id:\"986fbcd7bfac6456d856fe899c5470be6d38426700592e96630baba170c50f12\" pid:5175 exited_at:{seconds:1762358330 
nanos:631822720}" Nov 5 15:58:50.783694 containerd[1625]: time="2025-11-05T15:58:50.783497005Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:50.918563 containerd[1625]: time="2025-11-05T15:58:50.918449091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:58:50.918563 containerd[1625]: time="2025-11-05T15:58:50.918555613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:58:50.918809 kubelet[2823]: E1105 15:58:50.918756 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:58:50.918809 kubelet[2823]: E1105 15:58:50.918805 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:58:50.919127 kubelet[2823]: E1105 15:58:50.919008 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-94f7df89d-9b28s_calico-system(e568ac1b-7203-41c5-978d-53ea0a375013): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:50.919127 kubelet[2823]: E1105 15:58:50.919065 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" podUID="e568ac1b-7203-41c5-978d-53ea0a375013" Nov 5 15:58:50.919384 containerd[1625]: time="2025-11-05T15:58:50.919331683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:58:51.385149 containerd[1625]: time="2025-11-05T15:58:51.385079811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:51.440799 kubelet[2823]: E1105 15:58:51.440736 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4gzxq" podUID="def47882-ae7c-4469-bdea-ed04b63c4c12" Nov 5 15:58:51.460579 containerd[1625]: time="2025-11-05T15:58:51.460526431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:58:51.460954 containerd[1625]: time="2025-11-05T15:58:51.460603196Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:58:51.460982 kubelet[2823]: E1105 15:58:51.460861 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:58:51.460982 kubelet[2823]: E1105 15:58:51.460919 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:58:51.461151 kubelet[2823]: E1105 15:58:51.461110 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:51.461561 containerd[1625]: time="2025-11-05T15:58:51.461536294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:58:51.954087 containerd[1625]: time="2025-11-05T15:58:51.954018167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:52.047859 containerd[1625]: time="2025-11-05T15:58:52.047777253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Nov 5 15:58:52.047990 containerd[1625]: time="2025-11-05T15:58:52.047857174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:58:52.048258 kubelet[2823]: E1105 15:58:52.048200 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:58:52.048258 kubelet[2823]: E1105 15:58:52.048255 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:58:52.048480 kubelet[2823]: E1105 15:58:52.048433 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-666f597f8b-qsk25_calico-apiserver(36f63588-e88e-4e5e-be35-3d453ebfbecf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:52.048536 kubelet[2823]: E1105 15:58:52.048484 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" podUID="36f63588-e88e-4e5e-be35-3d453ebfbecf" Nov 5 15:58:52.048829 containerd[1625]: time="2025-11-05T15:58:52.048783007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:58:52.444575 kubelet[2823]: E1105 15:58:52.444519 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544bd576d6-k5lb4" podUID="5542c344-4a27-4a33-b28f-ee6d288fca27" Nov 5 15:58:52.491110 containerd[1625]: time="2025-11-05T15:58:52.491037066Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:58:52.492416 containerd[1625]: time="2025-11-05T15:58:52.492364210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:58:52.492502 containerd[1625]: time="2025-11-05T15:58:52.492442869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:58:52.492673 kubelet[2823]: E1105 15:58:52.492630 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:58:52.492793 kubelet[2823]: E1105 15:58:52.492682 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:58:52.492793 kubelet[2823]: E1105 15:58:52.492772 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:58:52.492918 kubelet[2823]: E1105 15:58:52.492815 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:58:54.903751 systemd[1]: Started sshd@14-10.0.0.107:22-10.0.0.1:55488.service - OpenSSH per-connection server daemon (10.0.0.1:55488). Nov 5 15:58:55.028850 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 55488 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:58:55.030946 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:58:55.038267 systemd-logind[1592]: New session 15 of user core. Nov 5 15:58:55.042557 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:58:55.217869 sshd[5199]: Connection closed by 10.0.0.1 port 55488 Nov 5 15:58:55.218201 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Nov 5 15:58:55.224141 systemd[1]: sshd@14-10.0.0.107:22-10.0.0.1:55488.service: Deactivated successfully. Nov 5 15:58:55.227088 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:58:55.228181 systemd-logind[1592]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:58:55.229949 systemd-logind[1592]: Removed session 15. Nov 5 15:59:00.235398 systemd[1]: Started sshd@15-10.0.0.107:22-10.0.0.1:53724.service - OpenSSH per-connection server daemon (10.0.0.1:53724). 
Nov 5 15:59:00.295766 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 53724 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:00.297956 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:00.303789 systemd-logind[1592]: New session 16 of user core. Nov 5 15:59:00.311625 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:59:00.502882 sshd[5216]: Connection closed by 10.0.0.1 port 53724 Nov 5 15:59:00.505732 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:00.511243 systemd-logind[1592]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:59:00.511883 systemd[1]: sshd@15-10.0.0.107:22-10.0.0.1:53724.service: Deactivated successfully. Nov 5 15:59:00.514697 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:59:00.517151 systemd-logind[1592]: Removed session 16. Nov 5 15:59:03.441621 kubelet[2823]: E1105 15:59:03.441526 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" podUID="e568ac1b-7203-41c5-978d-53ea0a375013" Nov 5 15:59:04.441000 containerd[1625]: time="2025-11-05T15:59:04.440653320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:59:04.853069 containerd[1625]: time="2025-11-05T15:59:04.852893795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:59:04.854084 containerd[1625]: time="2025-11-05T15:59:04.854026876Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:59:04.854084 containerd[1625]: time="2025-11-05T15:59:04.854093462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:59:04.854401 kubelet[2823]: E1105 15:59:04.854325 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:59:04.854766 kubelet[2823]: E1105 15:59:04.854409 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:59:04.854766 kubelet[2823]: E1105 15:59:04.854659 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-544bd576d6-k5lb4_calico-system(5542c344-4a27-4a33-b28f-ee6d288fca27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:59:04.854953 containerd[1625]: time="2025-11-05T15:59:04.854925364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:59:05.186015 containerd[1625]: 
time="2025-11-05T15:59:05.185964315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:59:05.215035 containerd[1625]: time="2025-11-05T15:59:05.214935577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:59:05.215228 containerd[1625]: time="2025-11-05T15:59:05.215053540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:59:05.215376 kubelet[2823]: E1105 15:59:05.215275 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:59:05.215648 kubelet[2823]: E1105 15:59:05.215615 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:59:05.215947 kubelet[2823]: E1105 15:59:05.215915 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4gzxq_calico-system(def47882-ae7c-4469-bdea-ed04b63c4c12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
logger="UnhandledError" Nov 5 15:59:05.216217 containerd[1625]: time="2025-11-05T15:59:05.216187293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:59:05.216452 kubelet[2823]: E1105 15:59:05.216391 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4gzxq" podUID="def47882-ae7c-4469-bdea-ed04b63c4c12" Nov 5 15:59:05.519117 systemd[1]: Started sshd@16-10.0.0.107:22-10.0.0.1:53732.service - OpenSSH per-connection server daemon (10.0.0.1:53732). Nov 5 15:59:05.580129 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 53732 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:05.581892 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:05.586624 systemd-logind[1592]: New session 17 of user core. Nov 5 15:59:05.596485 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 5 15:59:05.671196 containerd[1625]: time="2025-11-05T15:59:05.671138157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:59:05.736035 containerd[1625]: time="2025-11-05T15:59:05.735749565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:59:05.736035 containerd[1625]: time="2025-11-05T15:59:05.735796474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:59:05.736246 kubelet[2823]: E1105 15:59:05.736068 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:59:05.736246 kubelet[2823]: E1105 15:59:05.736118 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:59:05.736348 kubelet[2823]: E1105 15:59:05.736290 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-544bd576d6-k5lb4_calico-system(5542c344-4a27-4a33-b28f-ee6d288fca27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:59:05.736379 kubelet[2823]: E1105 15:59:05.736357 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544bd576d6-k5lb4" podUID="5542c344-4a27-4a33-b28f-ee6d288fca27" Nov 5 15:59:05.736879 containerd[1625]: time="2025-11-05T15:59:05.736840417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:59:05.739421 sshd[5242]: Connection closed by 10.0.0.1 port 53732 Nov 5 15:59:05.739889 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:05.745785 systemd[1]: sshd@16-10.0.0.107:22-10.0.0.1:53732.service: Deactivated successfully. Nov 5 15:59:05.748091 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:59:05.748989 systemd-logind[1592]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:59:05.750641 systemd-logind[1592]: Removed session 17. 
Nov 5 15:59:06.129088 containerd[1625]: time="2025-11-05T15:59:06.129016605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:59:06.130286 containerd[1625]: time="2025-11-05T15:59:06.130232512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:59:06.130376 containerd[1625]: time="2025-11-05T15:59:06.130356987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:59:06.130669 kubelet[2823]: E1105 15:59:06.130590 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:59:06.131011 kubelet[2823]: E1105 15:59:06.130672 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:59:06.131011 kubelet[2823]: E1105 15:59:06.130756 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-666f597f8b-vgtfd_calico-apiserver(091e450e-4da8-4476-b3e3-4b2049f9a92c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:59:06.131011 kubelet[2823]: E1105 15:59:06.130793 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" podUID="091e450e-4da8-4476-b3e3-4b2049f9a92c" Nov 5 15:59:06.443439 kubelet[2823]: E1105 15:59:06.443290 2823 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:59:06.444649 kubelet[2823]: E1105 15:59:06.444588 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:59:07.440993 
kubelet[2823]: E1105 15:59:07.440770 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" podUID="36f63588-e88e-4e5e-be35-3d453ebfbecf" Nov 5 15:59:10.753835 systemd[1]: Started sshd@17-10.0.0.107:22-10.0.0.1:57974.service - OpenSSH per-connection server daemon (10.0.0.1:57974). Nov 5 15:59:10.816049 sshd[5256]: Accepted publickey for core from 10.0.0.1 port 57974 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:10.817906 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:10.823065 systemd-logind[1592]: New session 18 of user core. Nov 5 15:59:10.833527 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:59:10.968666 sshd[5259]: Connection closed by 10.0.0.1 port 57974 Nov 5 15:59:10.969053 sshd-session[5256]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:10.979407 systemd[1]: sshd@17-10.0.0.107:22-10.0.0.1:57974.service: Deactivated successfully. Nov 5 15:59:10.981835 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:59:10.982941 systemd-logind[1592]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:59:10.987093 systemd[1]: Started sshd@18-10.0.0.107:22-10.0.0.1:57982.service - OpenSSH per-connection server daemon (10.0.0.1:57982). Nov 5 15:59:10.987859 systemd-logind[1592]: Removed session 18. 
Nov 5 15:59:11.039788 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 57982 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:11.041662 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:11.047101 systemd-logind[1592]: New session 19 of user core. Nov 5 15:59:11.055452 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:59:12.158216 sshd[5275]: Connection closed by 10.0.0.1 port 57982 Nov 5 15:59:12.158021 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:12.170077 systemd[1]: sshd@18-10.0.0.107:22-10.0.0.1:57982.service: Deactivated successfully. Nov 5 15:59:12.174163 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:59:12.175533 systemd-logind[1592]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:59:12.180231 systemd[1]: Started sshd@19-10.0.0.107:22-10.0.0.1:57992.service - OpenSSH per-connection server daemon (10.0.0.1:57992). Nov 5 15:59:12.180988 systemd-logind[1592]: Removed session 19. Nov 5 15:59:12.271887 sshd[5287]: Accepted publickey for core from 10.0.0.1 port 57992 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:12.273822 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:12.279510 systemd-logind[1592]: New session 20 of user core. Nov 5 15:59:12.294591 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:59:13.053933 sshd[5290]: Connection closed by 10.0.0.1 port 57992 Nov 5 15:59:13.056490 sshd-session[5287]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:13.073517 systemd[1]: sshd@19-10.0.0.107:22-10.0.0.1:57992.service: Deactivated successfully. Nov 5 15:59:13.078141 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:59:13.082371 systemd-logind[1592]: Session 20 logged out. Waiting for processes to exit. 
Nov 5 15:59:13.087553 systemd-logind[1592]: Removed session 20. Nov 5 15:59:13.091686 systemd[1]: Started sshd@20-10.0.0.107:22-10.0.0.1:58008.service - OpenSSH per-connection server daemon (10.0.0.1:58008). Nov 5 15:59:13.178851 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 58008 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:13.181009 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:13.189437 systemd-logind[1592]: New session 21 of user core. Nov 5 15:59:13.202554 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:59:13.474932 sshd[5310]: Connection closed by 10.0.0.1 port 58008 Nov 5 15:59:13.479064 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:13.493127 systemd[1]: sshd@20-10.0.0.107:22-10.0.0.1:58008.service: Deactivated successfully. Nov 5 15:59:13.500223 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:59:13.506284 systemd-logind[1592]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:59:13.514541 systemd[1]: Started sshd@21-10.0.0.107:22-10.0.0.1:58014.service - OpenSSH per-connection server daemon (10.0.0.1:58014). Nov 5 15:59:13.518656 systemd-logind[1592]: Removed session 21. Nov 5 15:59:13.599335 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 58014 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:13.601029 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:13.607660 systemd-logind[1592]: New session 22 of user core. Nov 5 15:59:13.613629 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 15:59:13.760713 sshd[5325]: Connection closed by 10.0.0.1 port 58014 Nov 5 15:59:13.760684 sshd-session[5322]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:13.767627 systemd[1]: sshd@21-10.0.0.107:22-10.0.0.1:58014.service: Deactivated successfully. 
Nov 5 15:59:13.770471 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:59:13.771766 systemd-logind[1592]: Session 22 logged out. Waiting for processes to exit. Nov 5 15:59:13.773711 systemd-logind[1592]: Removed session 22. Nov 5 15:59:16.444242 containerd[1625]: time="2025-11-05T15:59:16.444150902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:59:16.868159 containerd[1625]: time="2025-11-05T15:59:16.867992484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:59:16.945329 containerd[1625]: time="2025-11-05T15:59:16.945235385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:59:16.945482 containerd[1625]: time="2025-11-05T15:59:16.945356012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:59:16.945678 kubelet[2823]: E1105 15:59:16.945581 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:59:16.946192 kubelet[2823]: E1105 15:59:16.945719 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:59:16.946192 kubelet[2823]: E1105 15:59:16.945833 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-94f7df89d-9b28s_calico-system(e568ac1b-7203-41c5-978d-53ea0a375013): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:59:16.946192 kubelet[2823]: E1105 15:59:16.945927 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" podUID="e568ac1b-7203-41c5-978d-53ea0a375013" Nov 5 15:59:17.441871 kubelet[2823]: E1105 15:59:17.441781 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544bd576d6-k5lb4" podUID="5542c344-4a27-4a33-b28f-ee6d288fca27" Nov 5 15:59:17.442841 containerd[1625]: time="2025-11-05T15:59:17.442287604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:59:17.896073 containerd[1625]: time="2025-11-05T15:59:17.895726282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:59:18.128572 containerd[1625]: time="2025-11-05T15:59:18.128475332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:59:18.128572 containerd[1625]: time="2025-11-05T15:59:18.128494739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:59:18.128915 kubelet[2823]: E1105 15:59:18.128855 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:59:18.129390 kubelet[2823]: E1105 15:59:18.128923 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:59:18.129390 kubelet[2823]: E1105 15:59:18.129030 2823 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:59:18.129867 containerd[1625]: time="2025-11-05T15:59:18.129831101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:59:18.441481 kubelet[2823]: E1105 15:59:18.441415 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4gzxq" podUID="def47882-ae7c-4469-bdea-ed04b63c4c12" Nov 5 15:59:18.574384 containerd[1625]: time="2025-11-05T15:59:18.574275812Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:59:18.641254 containerd[1625]: time="2025-11-05T15:59:18.641179415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:59:18.641488 containerd[1625]: time="2025-11-05T15:59:18.641240199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes 
read=93" Nov 5 15:59:18.641563 kubelet[2823]: E1105 15:59:18.641511 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:59:18.641633 kubelet[2823]: E1105 15:59:18.641570 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:59:18.641678 kubelet[2823]: E1105 15:59:18.641655 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cr9h8_calico-system(6636406c-76cb-4ddc-8f4d-b82da1f33a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:59:18.641749 kubelet[2823]: E1105 15:59:18.641700 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:59:18.776282 systemd[1]: Started sshd@22-10.0.0.107:22-10.0.0.1:58018.service - OpenSSH per-connection server daemon (10.0.0.1:58018). Nov 5 15:59:18.831082 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 58018 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:18.834699 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:18.843186 systemd-logind[1592]: New session 23 of user core. Nov 5 15:59:18.849609 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:59:19.008495 sshd[5347]: Connection closed by 10.0.0.1 port 58018 Nov 5 15:59:19.009554 sshd-session[5344]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:19.024855 systemd[1]: sshd@22-10.0.0.107:22-10.0.0.1:58018.service: Deactivated successfully. Nov 5 15:59:19.031046 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:59:19.033402 systemd-logind[1592]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:59:19.040552 systemd-logind[1592]: Removed session 23. 
Nov 5 15:59:20.441727 containerd[1625]: time="2025-11-05T15:59:20.441672893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:59:20.739613 containerd[1625]: time="2025-11-05T15:59:20.739483912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c0dd3ac4cf01211130c658b31df8de668f217de25d09b0c7ef0bd3a7c9c2c60\" id:\"6b01b248df0b306945711bb09f984b8cb1cdbaff4a2cc677e625858eaa87a3f2\" pid:5374 exited_at:{seconds:1762358360 nanos:739031087}" Nov 5 15:59:20.820889 containerd[1625]: time="2025-11-05T15:59:20.820827352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:59:20.822402 containerd[1625]: time="2025-11-05T15:59:20.822343544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:59:20.822478 containerd[1625]: time="2025-11-05T15:59:20.822439445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:59:20.822712 kubelet[2823]: E1105 15:59:20.822657 2823 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:59:20.822712 kubelet[2823]: E1105 15:59:20.822710 2823 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:59:20.823390 kubelet[2823]: E1105 15:59:20.822816 2823 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-666f597f8b-qsk25_calico-apiserver(36f63588-e88e-4e5e-be35-3d453ebfbecf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:59:20.823390 kubelet[2823]: E1105 15:59:20.822859 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-qsk25" podUID="36f63588-e88e-4e5e-be35-3d453ebfbecf" Nov 5 15:59:21.441471 kubelet[2823]: E1105 15:59:21.441408 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-666f597f8b-vgtfd" podUID="091e450e-4da8-4476-b3e3-4b2049f9a92c" Nov 5 15:59:24.024387 systemd[1]: Started sshd@23-10.0.0.107:22-10.0.0.1:46712.service - OpenSSH per-connection server daemon (10.0.0.1:46712). 
Nov 5 15:59:24.094458 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 46712 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:24.097014 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:24.102733 systemd-logind[1592]: New session 24 of user core. Nov 5 15:59:24.110531 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 15:59:24.264158 sshd[5392]: Connection closed by 10.0.0.1 port 46712 Nov 5 15:59:24.264551 sshd-session[5389]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:24.269255 systemd[1]: sshd@23-10.0.0.107:22-10.0.0.1:46712.service: Deactivated successfully. Nov 5 15:59:24.273019 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:59:24.276509 systemd-logind[1592]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:59:24.277738 systemd-logind[1592]: Removed session 24. Nov 5 15:59:29.276754 systemd[1]: Started sshd@24-10.0.0.107:22-10.0.0.1:46720.service - OpenSSH per-connection server daemon (10.0.0.1:46720). Nov 5 15:59:29.338969 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 46720 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:59:29.340719 sshd-session[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:29.346928 systemd-logind[1592]: New session 25 of user core. Nov 5 15:59:29.359583 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 5 15:59:29.442061 kubelet[2823]: E1105 15:59:29.441867 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cr9h8" podUID="6636406c-76cb-4ddc-8f4d-b82da1f33a92" Nov 5 15:59:29.584116 sshd[5410]: Connection closed by 10.0.0.1 port 46720 Nov 5 15:59:29.584864 sshd-session[5407]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:29.593632 systemd[1]: sshd@24-10.0.0.107:22-10.0.0.1:46720.service: Deactivated successfully. Nov 5 15:59:29.595913 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 15:59:29.597296 systemd-logind[1592]: Session 25 logged out. Waiting for processes to exit. Nov 5 15:59:29.599761 systemd-logind[1592]: Removed session 25. 
Nov 5 15:59:30.441626 kubelet[2823]: E1105 15:59:30.441564 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-94f7df89d-9b28s" podUID="e568ac1b-7203-41c5-978d-53ea0a375013" Nov 5 15:59:30.442567 kubelet[2823]: E1105 15:59:30.442399 2823 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544bd576d6-k5lb4" podUID="5542c344-4a27-4a33-b28f-ee6d288fca27"