Oct 28 05:11:34.489235 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 03:19:40 -00 2025
Oct 28 05:11:34.489265 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=449db75fd0bf4f00a7b0da93783dc37f82f4a66df937e11c006397de0369495c
Oct 28 05:11:34.489274 kernel: BIOS-provided physical RAM map:
Oct 28 05:11:34.489281 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 28 05:11:34.489288 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 28 05:11:34.489301 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 28 05:11:34.489309 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 28 05:11:34.489316 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Oct 28 05:11:34.489322 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 28 05:11:34.489329 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 28 05:11:34.489336 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 28 05:11:34.489343 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 28 05:11:34.489362 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 28 05:11:34.489381 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 28 05:11:34.489392 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 28 05:11:34.489402 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 28 05:11:34.489412 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 28 05:11:34.489422 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 28 05:11:34.489439 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 28 05:11:34.489447 kernel: NX (Execute Disable) protection: active
Oct 28 05:11:34.489454 kernel: APIC: Static calls initialized
Oct 28 05:11:34.489462 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Oct 28 05:11:34.489469 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Oct 28 05:11:34.489477 kernel: extended physical RAM map:
Oct 28 05:11:34.489484 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 28 05:11:34.489492 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 28 05:11:34.489499 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 28 05:11:34.489506 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 28 05:11:34.489520 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Oct 28 05:11:34.489528 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Oct 28 05:11:34.489535 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Oct 28 05:11:34.489542 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Oct 28 05:11:34.489550 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Oct 28 05:11:34.489557 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 28 05:11:34.489564 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 28 05:11:34.489572 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 28 05:11:34.489579 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 28 05:11:34.489587 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 28 05:11:34.489594 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 28 05:11:34.489608 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 28 05:11:34.489623 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 28 05:11:34.489630 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 28 05:11:34.489638 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 28 05:11:34.489652 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 28 05:11:34.489659 kernel: efi: EFI v2.7 by EDK II
Oct 28 05:11:34.489667 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Oct 28 05:11:34.489675 kernel: random: crng init done
Oct 28 05:11:34.489683 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Oct 28 05:11:34.489690 kernel: secureboot: Secure boot enabled
Oct 28 05:11:34.489698 kernel: SMBIOS 2.8 present.
Oct 28 05:11:34.489705 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 28 05:11:34.489713 kernel: DMI: Memory slots populated: 1/1
Oct 28 05:11:34.489720 kernel: Hypervisor detected: KVM
Oct 28 05:11:34.489742 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 28 05:11:34.489751 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 28 05:11:34.489759 kernel: kvm-clock: using sched offset of 4990731107 cycles
Oct 28 05:11:34.489767 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 28 05:11:34.489775 kernel: tsc: Detected 2794.748 MHz processor
Oct 28 05:11:34.489784 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 28 05:11:34.489792 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 28 05:11:34.489800 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 28 05:11:34.489808 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 28 05:11:34.489823 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 28 05:11:34.489831 kernel: Using GB pages for direct mapping
Oct 28 05:11:34.489839 kernel: ACPI: Early table checksum verification disabled
Oct 28 05:11:34.489847 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Oct 28 05:11:34.489855 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 28 05:11:34.489863 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 05:11:34.489871 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 05:11:34.489885 kernel: ACPI: FACS 0x000000009BBDD000 000040
Oct 28 05:11:34.489893 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 05:11:34.489901 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 05:11:34.489909 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 05:11:34.489917 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 05:11:34.489925 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 28 05:11:34.489933 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Oct 28 05:11:34.489947 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Oct 28 05:11:34.489955 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Oct 28 05:11:34.489963 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Oct 28 05:11:34.489971 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Oct 28 05:11:34.489979 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Oct 28 05:11:34.489987 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Oct 28 05:11:34.489995 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Oct 28 05:11:34.490009 kernel: No NUMA configuration found
Oct 28 05:11:34.490017 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Oct 28 05:11:34.490025 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Oct 28 05:11:34.490033 kernel: Zone ranges:
Oct 28 05:11:34.490041 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 28 05:11:34.490052 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Oct 28 05:11:34.490062 kernel: Normal empty
Oct 28 05:11:34.490072 kernel: Device empty
Oct 28 05:11:34.490091 kernel: Movable zone start for each node
Oct 28 05:11:34.490102 kernel: Early memory node ranges
Oct 28 05:11:34.490113 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Oct 28 05:11:34.490124 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Oct 28 05:11:34.490134 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Oct 28 05:11:34.490144 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Oct 28 05:11:34.490154 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Oct 28 05:11:34.490164 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Oct 28 05:11:34.490184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 28 05:11:34.490195 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Oct 28 05:11:34.490206 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 28 05:11:34.490217 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 28 05:11:34.490227 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 28 05:11:34.490238 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Oct 28 05:11:34.490248 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 28 05:11:34.490270 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 28 05:11:34.490281 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 28 05:11:34.490292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 28 05:11:34.490303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 28 05:11:34.490314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 28 05:11:34.490324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 28 05:11:34.490336 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 28 05:11:34.490376 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 28 05:11:34.490387 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 28 05:11:34.490398 kernel: TSC deadline timer available
Oct 28 05:11:34.490409 kernel: CPU topo: Max. logical packages: 1
Oct 28 05:11:34.490417 kernel: CPU topo: Max. logical dies: 1
Oct 28 05:11:34.490453 kernel: CPU topo: Max. dies per package: 1
Oct 28 05:11:34.490461 kernel: CPU topo: Max. threads per core: 1
Oct 28 05:11:34.490469 kernel: CPU topo: Num. cores per package: 4
Oct 28 05:11:34.490478 kernel: CPU topo: Num. threads per package: 4
Oct 28 05:11:34.490486 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 28 05:11:34.490500 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 28 05:11:34.490508 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 28 05:11:34.490517 kernel: kvm-guest: setup PV sched yield
Oct 28 05:11:34.490531 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 28 05:11:34.490539 kernel: Booting paravirtualized kernel on KVM
Oct 28 05:11:34.490548 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 28 05:11:34.490556 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 28 05:11:34.490565 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 28 05:11:34.490573 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 28 05:11:34.490581 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 28 05:11:34.490595 kernel: kvm-guest: PV spinlocks enabled
Oct 28 05:11:34.490603 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 28 05:11:34.490613 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=449db75fd0bf4f00a7b0da93783dc37f82f4a66df937e11c006397de0369495c
Oct 28 05:11:34.490622 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 28 05:11:34.490630 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 28 05:11:34.490638 kernel: Fallback order for Node 0: 0
Oct 28 05:11:34.490646 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Oct 28 05:11:34.490661 kernel: Policy zone: DMA32
Oct 28 05:11:34.490669 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 28 05:11:34.490677 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 28 05:11:34.490686 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 28 05:11:34.490694 kernel: ftrace: allocated 157 pages with 5 groups
Oct 28 05:11:34.490702 kernel: Dynamic Preempt: voluntary
Oct 28 05:11:34.490710 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 28 05:11:34.490725 kernel: rcu: RCU event tracing is enabled.
Oct 28 05:11:34.490734 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 28 05:11:34.490753 kernel: Trampoline variant of Tasks RCU enabled.
Oct 28 05:11:34.490763 kernel: Rude variant of Tasks RCU enabled.
Oct 28 05:11:34.490774 kernel: Tracing variant of Tasks RCU enabled.
Oct 28 05:11:34.490782 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 28 05:11:34.490790 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 28 05:11:34.490799 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 05:11:34.490814 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 05:11:34.490823 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 05:11:34.490831 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 28 05:11:34.490839 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 28 05:11:34.490848 kernel: Console: colour dummy device 80x25
Oct 28 05:11:34.490856 kernel: printk: legacy console [ttyS0] enabled
Oct 28 05:11:34.490864 kernel: ACPI: Core revision 20240827
Oct 28 05:11:34.490883 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 28 05:11:34.490891 kernel: APIC: Switch to symmetric I/O mode setup
Oct 28 05:11:34.490900 kernel: x2apic enabled
Oct 28 05:11:34.490908 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 28 05:11:34.490916 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 28 05:11:34.490925 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 28 05:11:34.490933 kernel: kvm-guest: setup PV IPIs
Oct 28 05:11:34.490947 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 28 05:11:34.490956 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 28 05:11:34.490964 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 28 05:11:34.490973 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 28 05:11:34.490981 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 28 05:11:34.490989 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 28 05:11:34.490998 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 28 05:11:34.491012 kernel: Spectre V2 : Mitigation: Retpolines
Oct 28 05:11:34.491020 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 28 05:11:34.491029 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 28 05:11:34.491037 kernel: active return thunk: retbleed_return_thunk
Oct 28 05:11:34.491045 kernel: RETBleed: Mitigation: untrained return thunk
Oct 28 05:11:34.491054 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 28 05:11:34.491062 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 28 05:11:34.491077 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 28 05:11:34.491086 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 28 05:11:34.491094 kernel: active return thunk: srso_return_thunk
Oct 28 05:11:34.491102 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 28 05:11:34.491111 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 28 05:11:34.491119 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 28 05:11:34.491134 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 28 05:11:34.491142 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 28 05:11:34.491151 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 28 05:11:34.491159 kernel: Freeing SMP alternatives memory: 32K
Oct 28 05:11:34.491167 kernel: pid_max: default: 32768 minimum: 301
Oct 28 05:11:34.491175 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 28 05:11:34.491184 kernel: landlock: Up and running.
Oct 28 05:11:34.491198 kernel: SELinux: Initializing.
Oct 28 05:11:34.491206 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 28 05:11:34.491214 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 28 05:11:34.491223 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 28 05:11:34.491231 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 28 05:11:34.491239 kernel: ... version: 0
Oct 28 05:11:34.491247 kernel: ... bit width: 48
Oct 28 05:11:34.491256 kernel: ... generic registers: 6
Oct 28 05:11:34.491270 kernel: ... value mask: 0000ffffffffffff
Oct 28 05:11:34.491278 kernel: ... max period: 00007fffffffffff
Oct 28 05:11:34.491286 kernel: ... fixed-purpose events: 0
Oct 28 05:11:34.491295 kernel: ... event mask: 000000000000003f
Oct 28 05:11:34.491303 kernel: signal: max sigframe size: 1776
Oct 28 05:11:34.491311 kernel: rcu: Hierarchical SRCU implementation.
Oct 28 05:11:34.491320 kernel: rcu: Max phase no-delay instances is 400.
Oct 28 05:11:34.491334 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 28 05:11:34.491343 kernel: smp: Bringing up secondary CPUs ...
Oct 28 05:11:34.491412 kernel: smpboot: x86: Booting SMP configuration:
Oct 28 05:11:34.491422 kernel: .... node #0, CPUs: #1 #2 #3
Oct 28 05:11:34.491430 kernel: smp: Brought up 1 node, 4 CPUs
Oct 28 05:11:34.491438 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 28 05:11:34.491448 kernel: Memory: 2427648K/2552216K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15960K init, 2084K bss, 118632K reserved, 0K cma-reserved)
Oct 28 05:11:34.491471 kernel: devtmpfs: initialized
Oct 28 05:11:34.491482 kernel: x86/mm: Memory block size: 128MB
Oct 28 05:11:34.491493 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Oct 28 05:11:34.491504 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Oct 28 05:11:34.491516 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 28 05:11:34.491527 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 28 05:11:34.491536 kernel: pinctrl core: initialized pinctrl subsystem
Oct 28 05:11:34.491553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 28 05:11:34.491562 kernel: audit: initializing netlink subsys (disabled)
Oct 28 05:11:34.491570 kernel: audit: type=2000 audit(1761628291.888:1): state=initialized audit_enabled=0 res=1
Oct 28 05:11:34.491578 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 28 05:11:34.491586 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 28 05:11:34.491595 kernel: cpuidle: using governor menu
Oct 28 05:11:34.491605 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 28 05:11:34.491624 kernel: dca service started, version 1.12.1
Oct 28 05:11:34.491636 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 28 05:11:34.491647 kernel: PCI: Using configuration type 1 for base access
Oct 28 05:11:34.491658 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 28 05:11:34.491669 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 28 05:11:34.491681 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 28 05:11:34.491689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 28 05:11:34.491705 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 28 05:11:34.491714 kernel: ACPI: Added _OSI(Module Device)
Oct 28 05:11:34.491722 kernel: ACPI: Added _OSI(Processor Device)
Oct 28 05:11:34.491730 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 28 05:11:34.491746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 28 05:11:34.491754 kernel: ACPI: Interpreter enabled
Oct 28 05:11:34.491763 kernel: ACPI: PM: (supports S0 S5)
Oct 28 05:11:34.491778 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 28 05:11:34.491786 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 28 05:11:34.491794 kernel: PCI: Using E820 reservations for host bridge windows
Oct 28 05:11:34.491803 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 28 05:11:34.491811 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 28 05:11:34.492074 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 28 05:11:34.492248 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 28 05:11:34.492486 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 28 05:11:34.492503 kernel: PCI host bridge to bus 0000:00
Oct 28 05:11:34.492711 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 28 05:11:34.492910 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 28 05:11:34.493080 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 28 05:11:34.493246 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 28 05:11:34.493443 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 28 05:11:34.493633 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 28 05:11:34.493827 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 28 05:11:34.494037 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 28 05:11:34.494230 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 28 05:11:34.494543 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 28 05:11:34.494762 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 28 05:11:34.494929 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 28 05:11:34.495146 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 28 05:11:34.495351 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 28 05:11:34.495592 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 28 05:11:34.495769 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 28 05:11:34.495934 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 28 05:11:34.496152 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 28 05:11:34.496325 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 28 05:11:34.496521 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 28 05:11:34.496702 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 28 05:11:34.496892 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 28 05:11:34.497084 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 28 05:11:34.497290 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 28 05:11:34.497519 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 28 05:11:34.497722 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 28 05:11:34.497960 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 28 05:11:34.498161 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 28 05:11:34.498379 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 28 05:11:34.498585 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 28 05:11:34.498773 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 28 05:11:34.499227 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 28 05:11:34.499598 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 28 05:11:34.499611 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 28 05:11:34.499620 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 28 05:11:34.499628 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 28 05:11:34.499637 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 28 05:11:34.499645 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 28 05:11:34.499666 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 28 05:11:34.499675 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 28 05:11:34.499683 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 28 05:11:34.499691 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 28 05:11:34.499699 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 28 05:11:34.499708 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 28 05:11:34.499716 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 28 05:11:34.499731 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 28 05:11:34.499749 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 28 05:11:34.499757 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 28 05:11:34.499766 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 28 05:11:34.499774 kernel: iommu: Default domain type: Translated
Oct 28 05:11:34.499782 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 28 05:11:34.499791 kernel: efivars: Registered efivars operations
Oct 28 05:11:34.499806 kernel: PCI: Using ACPI for IRQ routing
Oct 28 05:11:34.499815 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 28 05:11:34.499824 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Oct 28 05:11:34.499832 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Oct 28 05:11:34.499840 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Oct 28 05:11:34.499849 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Oct 28 05:11:34.499857 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Oct 28 05:11:34.500032 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 28 05:11:34.500196 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 28 05:11:34.500378 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 28 05:11:34.500391 kernel: vgaarb: loaded
Oct 28 05:11:34.500400 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 28 05:11:34.500409 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 28 05:11:34.500417 kernel: clocksource: Switched to clocksource kvm-clock
Oct 28 05:11:34.500436 kernel: VFS: Disk quotas dquot_6.6.0
Oct 28 05:11:34.500445 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 28 05:11:34.500453 kernel: pnp: PnP ACPI init
Oct 28 05:11:34.500633 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 28 05:11:34.500646 kernel: pnp: PnP ACPI: found 6 devices
Oct 28 05:11:34.500654 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 28 05:11:34.500672 kernel: NET: Registered PF_INET protocol family
Oct 28 05:11:34.500680 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 28 05:11:34.500689 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 28 05:11:34.500697 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 28 05:11:34.500706 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 28 05:11:34.500714 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 28 05:11:34.500722 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 28 05:11:34.500747 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 28 05:11:34.500756 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 28 05:11:34.500765 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 28 05:11:34.500773 kernel: NET: Registered PF_XDP protocol family
Oct 28 05:11:34.500938 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 28 05:11:34.501106 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 28 05:11:34.501260 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 28 05:11:34.501462 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 28 05:11:34.501664 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 28 05:11:34.501857 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 28 05:11:34.502043 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 28 05:11:34.502235 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 28 05:11:34.502253 kernel: PCI: CLS 0 bytes, default 64
Oct 28 05:11:34.502280 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 28 05:11:34.502291 kernel: Initialise system trusted keyrings
Oct 28 05:11:34.502303 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 28 05:11:34.502314 kernel: Key type asymmetric registered
Oct 28 05:11:34.502326 kernel: Asymmetric key parser 'x509' registered
Oct 28 05:11:34.502422 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 28 05:11:34.502442 kernel: io scheduler mq-deadline registered
Oct 28 05:11:34.502461 kernel: io scheduler kyber registered
Oct 28 05:11:34.502473 kernel: io scheduler bfq registered
Oct 28 05:11:34.502485 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 28 05:11:34.502505 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 28 05:11:34.502517 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 28 05:11:34.502528 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 28 05:11:34.502540 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 28 05:11:34.502552 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 28 05:11:34.502574 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 28 05:11:34.502585 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 28 05:11:34.502597 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 28 05:11:34.502830 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 28 05:11:34.502850 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 28 05:11:34.503014 kernel: rtc_cmos 00:04: registered as rtc0
Oct 28 05:11:34.503186 kernel: rtc_cmos 00:04: setting system clock to 2025-10-28T05:11:32 UTC (1761628292)
Oct 28 05:11:34.503342 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 28 05:11:34.503375 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 28 05:11:34.503388 kernel: efifb: probing for efifb
Oct 28 05:11:34.503398 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 28 05:11:34.503409 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 28 05:11:34.503417 kernel: efifb: scrolling: redraw
Oct 28 05:11:34.503436 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 28 05:11:34.503446 kernel: Console: switching to colour frame buffer device 160x50
Oct 28 05:11:34.503461 kernel: fb0: EFI VGA frame buffer device
Oct 28 05:11:34.503469 kernel: pstore: Using crash dump compression: deflate
Oct 28 05:11:34.503484 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 28 05:11:34.503493 kernel: NET: Registered PF_INET6 protocol family
Oct 28 05:11:34.503502 kernel: Segment Routing with IPv6
Oct 28 05:11:34.503510 kernel: In-situ OAM (IOAM) with IPv6
Oct 28 05:11:34.503519 kernel: NET: Registered PF_PACKET protocol family
Oct 28 05:11:34.503528 kernel: Key type dns_resolver registered
Oct 28 05:11:34.503537 kernel: IPI shorthand broadcast: enabled
Oct 28 05:11:34.503545 kernel: sched_clock: Marking stable (1622003007, 253631868)->(1926507229, -50872354)
Oct 28 05:11:34.503560 kernel: registered taskstats version 1
Oct 28 05:11:34.503569 kernel: Loading compiled-in X.509 certificates
Oct 28 05:11:34.503578 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: a9d98af1927e389c63ed03bf44a9f2758bf88a8e'
Oct 28 05:11:34.503587 kernel: Demotion targets for Node 0: null
Oct 28 05:11:34.503596 kernel: Key type .fscrypt registered
Oct 28 05:11:34.503604 kernel: Key type fscrypt-provisioning registered
Oct 28 05:11:34.503613 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 28 05:11:34.503629 kernel: ima: Allocated hash algorithm: sha1
Oct 28 05:11:34.503638 kernel: ima: No architecture policies found
Oct 28 05:11:34.503646 kernel: clk: Disabling unused clocks
Oct 28 05:11:34.503655 kernel: Freeing unused kernel image (initmem) memory: 15960K
Oct 28 05:11:34.503664 kernel: Write protecting the kernel read-only data: 45056k
Oct 28 05:11:34.503673 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Oct 28 05:11:34.503682 kernel: Run /init as init process
Oct 28 05:11:34.503696 kernel: with arguments:
Oct 28 05:11:34.503705 kernel: /init
Oct 28 05:11:34.503714 kernel: with environment:
Oct 28 05:11:34.503723 kernel: HOME=/
Oct 28 05:11:34.503731 kernel: TERM=linux
Oct 28 05:11:34.503748 kernel: SCSI subsystem initialized
Oct 28 05:11:34.503758 kernel: libata version 3.00 loaded.
Oct 28 05:11:34.503938 kernel: ahci 0000:00:1f.2: version 3.0
Oct 28 05:11:34.503950 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 28 05:11:34.504128 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 28 05:11:34.504293 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 28 05:11:34.504487 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 28 05:11:34.504679 kernel: scsi host0: ahci
Oct 28 05:11:34.504879 kernel: scsi host1: ahci
Oct 28 05:11:34.505060 kernel: scsi host2: ahci
Oct 28 05:11:34.505234 kernel: scsi host3: ahci
Oct 28 05:11:34.505566 kernel: scsi host4: ahci
Oct 28 05:11:34.505761 kernel: scsi host5: ahci
Oct 28 05:11:34.505786 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Oct 28 05:11:34.505796 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Oct 28 05:11:34.505805 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Oct 28 05:11:34.505814 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Oct 28 05:11:34.505823 kernel:
ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 28 05:11:34.505838 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 28 05:11:34.505847 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 28 05:11:34.505862 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 28 05:11:34.505871 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 28 05:11:34.505880 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 28 05:11:34.505889 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 28 05:11:34.505898 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 28 05:11:34.505906 kernel: ata3.00: LPM support broken, forcing max_power Oct 28 05:11:34.505915 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 28 05:11:34.505930 kernel: ata3.00: applying bridge limits Oct 28 05:11:34.505939 kernel: ata3.00: LPM support broken, forcing max_power Oct 28 05:11:34.505947 kernel: ata3.00: configured for UDMA/100 Oct 28 05:11:34.506148 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 28 05:11:34.506329 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 28 05:11:34.506549 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 28 05:11:34.506576 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 28 05:11:34.506585 kernel: GPT:16515071 != 27000831 Oct 28 05:11:34.506594 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 28 05:11:34.506603 kernel: GPT:16515071 != 27000831 Oct 28 05:11:34.506611 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 28 05:11:34.506620 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 28 05:11:34.506629 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 28 05:11:34.506877 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 28 05:11:34.506895 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 28 05:11:34.507114 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 28 05:11:34.507130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 28 05:11:34.507143 kernel: device-mapper: uevent: version 1.0.3 Oct 28 05:11:34.507156 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 28 05:11:34.507168 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 28 05:11:34.507193 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 28 05:11:34.507205 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 28 05:11:34.507217 kernel: raid6: avx2x4 gen() 28448 MB/s Oct 28 05:11:34.507229 kernel: raid6: avx2x2 gen() 30191 MB/s Oct 28 05:11:34.507241 kernel: raid6: avx2x1 gen() 24991 MB/s Oct 28 05:11:34.507254 kernel: raid6: using algorithm avx2x2 gen() 30191 MB/s Oct 28 05:11:34.507266 kernel: raid6: .... 
xor() 19431 MB/s, rmw enabled Oct 28 05:11:34.507286 kernel: raid6: using avx2x2 recovery algorithm Oct 28 05:11:34.507299 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 28 05:11:34.507311 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 28 05:11:34.507323 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 28 05:11:34.507336 kernel: xor: automatically using best checksumming function avx Oct 28 05:11:34.507348 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 28 05:11:34.507397 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 28 05:11:34.507411 kernel: BTRFS: device fsid 98ad3ab2-0171-42ae-a5fc-7be2369f5a89 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (176) Oct 28 05:11:34.507434 kernel: BTRFS info (device dm-0): first mount of filesystem 98ad3ab2-0171-42ae-a5fc-7be2369f5a89 Oct 28 05:11:34.507446 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 28 05:11:34.507459 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 28 05:11:34.507471 kernel: BTRFS info (device dm-0): enabling free space tree Oct 28 05:11:34.507484 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 28 05:11:34.507495 kernel: loop: module loaded Oct 28 05:11:34.507508 kernel: loop0: detected capacity change from 0 to 100136 Oct 28 05:11:34.507534 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 28 05:11:34.507548 systemd[1]: Successfully made /usr/ read-only. 
Oct 28 05:11:34.507564 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 05:11:34.507577 systemd[1]: Detected virtualization kvm. Oct 28 05:11:34.507590 systemd[1]: Detected architecture x86-64. Oct 28 05:11:34.507602 systemd[1]: Running in initrd. Oct 28 05:11:34.507623 systemd[1]: No hostname configured, using default hostname. Oct 28 05:11:34.507637 systemd[1]: Hostname set to . Oct 28 05:11:34.507649 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 05:11:34.507661 systemd[1]: Queued start job for default target initrd.target. Oct 28 05:11:34.507674 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 05:11:34.507686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 05:11:34.507698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 05:11:34.507723 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 28 05:11:34.507746 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 05:11:34.507759 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 28 05:11:34.507773 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 28 05:11:34.507785 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 05:11:34.507806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Oct 28 05:11:34.507819 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 28 05:11:34.507830 systemd[1]: Reached target paths.target - Path Units. Oct 28 05:11:34.507842 systemd[1]: Reached target slices.target - Slice Units. Oct 28 05:11:34.507854 systemd[1]: Reached target swap.target - Swaps. Oct 28 05:11:34.507866 systemd[1]: Reached target timers.target - Timer Units. Oct 28 05:11:34.507879 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 05:11:34.507898 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 28 05:11:34.507910 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 28 05:11:34.507923 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 28 05:11:34.507935 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 05:11:34.507947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 05:11:34.507959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 05:11:34.507972 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 05:11:34.507992 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 28 05:11:34.508004 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 28 05:11:34.508016 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 05:11:34.508029 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 28 05:11:34.508042 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 28 05:11:34.508054 systemd[1]: Starting systemd-fsck-usr.service... 
Oct 28 05:11:34.508074 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 05:11:34.508086 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 05:11:34.508098 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:11:34.508110 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 28 05:11:34.508130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 05:11:34.508143 systemd[1]: Finished systemd-fsck-usr.service. Oct 28 05:11:34.508156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 05:11:34.508206 systemd-journald[310]: Collecting audit messages is disabled. Oct 28 05:11:34.508243 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 28 05:11:34.508255 kernel: Bridge firewalling registered Oct 28 05:11:34.508268 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 05:11:34.508280 systemd-journald[310]: Journal started Oct 28 05:11:34.508312 systemd-journald[310]: Runtime Journal (/run/log/journal/1487c99eda1d4860b32396c9a7d1c871) is 5.9M, max 47.8M, 41.8M free. Oct 28 05:11:34.506499 systemd-modules-load[314]: Inserted module 'br_netfilter' Oct 28 05:11:34.511601 systemd[1]: Started systemd-journald.service - Journal Service. Oct 28 05:11:34.515151 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 05:11:34.519449 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 05:11:34.541896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:11:34.554587 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 28 05:11:34.558608 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 28 05:11:34.561158 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 05:11:34.569897 systemd-tmpfiles[330]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 28 05:11:34.576829 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 05:11:34.581028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 05:11:34.583887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 05:11:34.588314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 05:11:34.598890 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 05:11:34.604722 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 28 05:11:34.637130 dracut-cmdline[354]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=449db75fd0bf4f00a7b0da93783dc37f82f4a66df937e11c006397de0369495c Oct 28 05:11:34.656307 systemd-resolved[348]: Positive Trust Anchors: Oct 28 05:11:34.656322 systemd-resolved[348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 05:11:34.656326 systemd-resolved[348]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 05:11:34.656367 systemd-resolved[348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 05:11:34.701634 systemd-resolved[348]: Defaulting to hostname 'linux'. Oct 28 05:11:34.703180 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 05:11:34.735905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 28 05:11:34.778397 kernel: Loading iSCSI transport class v2.0-870. Oct 28 05:11:34.793397 kernel: iscsi: registered transport (tcp) Oct 28 05:11:34.817875 kernel: iscsi: registered transport (qla4xxx) Oct 28 05:11:34.818122 kernel: QLogic iSCSI HBA Driver Oct 28 05:11:34.848590 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 05:11:34.876828 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 05:11:34.880625 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 05:11:34.941042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 28 05:11:34.947345 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 28 05:11:34.950734 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 28 05:11:35.006194 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Oct 28 05:11:35.016557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 05:11:35.070445 systemd-udevd[573]: Using default interface naming scheme 'v257'. Oct 28 05:11:35.085560 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 05:11:35.093160 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 28 05:11:35.163614 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 05:11:35.167240 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 05:11:35.205226 dracut-pre-trigger[637]: rd.md=0: removing MD RAID activation Oct 28 05:11:35.249230 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 05:11:35.252135 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 05:11:35.256666 systemd-networkd[691]: lo: Link UP Oct 28 05:11:35.256673 systemd-networkd[691]: lo: Gained carrier Oct 28 05:11:35.259258 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 05:11:35.259977 systemd[1]: Reached target network.target - Network. Oct 28 05:11:35.387152 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 05:11:35.395497 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 28 05:11:35.471378 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 28 05:11:35.495830 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 28 05:11:35.515721 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 28 05:11:35.521249 kernel: cryptd: max_cpu_qlen set to 1000 Oct 28 05:11:35.530762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Oct 28 05:11:35.549532 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 28 05:11:35.660451 systemd-networkd[691]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 05:11:35.660458 systemd-networkd[691]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 05:11:35.679489 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 28 05:11:35.660915 systemd-networkd[691]: eth0: Link UP Oct 28 05:11:35.662811 systemd-networkd[691]: eth0: Gained carrier Oct 28 05:11:35.686645 kernel: AES CTR mode by8 optimization enabled Oct 28 05:11:35.662822 systemd-networkd[691]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 05:11:35.675592 systemd-networkd[691]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 05:11:35.678222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 05:11:35.678497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:11:35.683843 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:11:35.685168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:11:35.711553 disk-uuid[803]: Primary Header is updated. Oct 28 05:11:35.711553 disk-uuid[803]: Secondary Entries is updated. Oct 28 05:11:35.711553 disk-uuid[803]: Secondary Header is updated. Oct 28 05:11:35.738526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:11:35.763347 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 28 05:11:35.764757 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 28 05:11:35.769473 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 05:11:35.773174 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 05:11:35.777514 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 28 05:11:35.801861 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 28 05:11:35.954329 systemd-resolved[348]: Detected conflict on linux IN A 10.0.0.49 Oct 28 05:11:35.954350 systemd-resolved[348]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Oct 28 05:11:36.755343 disk-uuid[828]: Warning: The kernel is still using the old partition table. Oct 28 05:11:36.755343 disk-uuid[828]: The new table will be used at the next reboot or after you Oct 28 05:11:36.755343 disk-uuid[828]: run partprobe(8) or kpartx(8) Oct 28 05:11:36.755343 disk-uuid[828]: The operation has completed successfully. Oct 28 05:11:36.766257 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 28 05:11:36.766450 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 28 05:11:36.770522 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 28 05:11:36.806448 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (855) Oct 28 05:11:36.806517 kernel: BTRFS info (device vda6): first mount of filesystem 7acd037c-32ce-4796-90d6-101869832417 Oct 28 05:11:36.806530 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 05:11:36.811916 kernel: BTRFS info (device vda6): turning on async discard Oct 28 05:11:36.811986 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 05:11:36.820385 kernel: BTRFS info (device vda6): last unmount of filesystem 7acd037c-32ce-4796-90d6-101869832417 Oct 28 05:11:36.821549 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Oct 28 05:11:36.824505 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 28 05:11:36.934959 ignition[874]: Ignition 2.22.0 Oct 28 05:11:36.934974 ignition[874]: Stage: fetch-offline Oct 28 05:11:36.935024 ignition[874]: no configs at "/usr/lib/ignition/base.d" Oct 28 05:11:36.935036 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 05:11:36.935147 ignition[874]: parsed url from cmdline: "" Oct 28 05:11:36.935151 ignition[874]: no config URL provided Oct 28 05:11:36.935156 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Oct 28 05:11:36.935167 ignition[874]: no config at "/usr/lib/ignition/user.ign" Oct 28 05:11:36.935211 ignition[874]: op(1): [started] loading QEMU firmware config module Oct 28 05:11:36.935215 ignition[874]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 28 05:11:37.107067 ignition[874]: op(1): [finished] loading QEMU firmware config module Oct 28 05:11:37.148537 systemd-networkd[691]: eth0: Gained IPv6LL Oct 28 05:11:37.195813 ignition[874]: parsing config with SHA512: 2b5ec781e31ce6e60acf6c938f597985dfff230fb2015189062116fcbf91516816f6c7a998f6c6735192b1f9c281fc5e66e8859c2c9588ed1733edc2eec56ca3 Oct 28 05:11:37.201699 unknown[874]: fetched base config from "system" Oct 28 05:11:37.201710 unknown[874]: fetched user config from "qemu" Oct 28 05:11:37.202057 ignition[874]: fetch-offline: fetch-offline passed Oct 28 05:11:37.202115 ignition[874]: Ignition finished successfully Oct 28 05:11:37.210548 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 05:11:37.211434 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 28 05:11:37.218106 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 28 05:11:37.265937 ignition[883]: Ignition 2.22.0 Oct 28 05:11:37.265952 ignition[883]: Stage: kargs Oct 28 05:11:37.266114 ignition[883]: no configs at "/usr/lib/ignition/base.d" Oct 28 05:11:37.266126 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 05:11:37.267083 ignition[883]: kargs: kargs passed Oct 28 05:11:37.267139 ignition[883]: Ignition finished successfully Oct 28 05:11:37.275275 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 28 05:11:37.277045 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 28 05:11:37.312419 ignition[891]: Ignition 2.22.0 Oct 28 05:11:37.312431 ignition[891]: Stage: disks Oct 28 05:11:37.312569 ignition[891]: no configs at "/usr/lib/ignition/base.d" Oct 28 05:11:37.312579 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 05:11:37.313247 ignition[891]: disks: disks passed Oct 28 05:11:37.313295 ignition[891]: Ignition finished successfully Oct 28 05:11:37.320960 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 28 05:11:37.321921 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 28 05:11:37.324785 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 28 05:11:37.325297 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 05:11:37.332594 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 05:11:37.335679 systemd[1]: Reached target basic.target - Basic System. Oct 28 05:11:37.339847 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 28 05:11:37.387753 systemd-fsck[901]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 28 05:11:37.396207 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 28 05:11:37.401708 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 28 05:11:37.527412 kernel: EXT4-fs (vda9): mounted filesystem 0ce42fa0-8451-4928-b788-6e54ab295d7a r/w with ordered data mode. Quota mode: none. Oct 28 05:11:37.528198 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 28 05:11:37.529449 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 28 05:11:37.534428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 05:11:37.535940 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 28 05:11:37.537924 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 28 05:11:37.537959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 28 05:11:37.537986 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 05:11:37.562583 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 28 05:11:37.569484 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (910) Oct 28 05:11:37.565325 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 28 05:11:37.574646 kernel: BTRFS info (device vda6): first mount of filesystem 7acd037c-32ce-4796-90d6-101869832417 Oct 28 05:11:37.574672 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 05:11:37.577960 kernel: BTRFS info (device vda6): turning on async discard Oct 28 05:11:37.577984 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 05:11:37.579302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 28 05:11:37.625203 initrd-setup-root[934]: cut: /sysroot/etc/passwd: No such file or directory Oct 28 05:11:37.629782 initrd-setup-root[941]: cut: /sysroot/etc/group: No such file or directory Oct 28 05:11:37.634256 initrd-setup-root[948]: cut: /sysroot/etc/shadow: No such file or directory Oct 28 05:11:37.638780 initrd-setup-root[955]: cut: /sysroot/etc/gshadow: No such file or directory Oct 28 05:11:37.736539 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 28 05:11:37.740004 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 28 05:11:37.741556 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 28 05:11:37.771406 kernel: BTRFS info (device vda6): last unmount of filesystem 7acd037c-32ce-4796-90d6-101869832417 Oct 28 05:11:37.790533 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 28 05:11:37.795261 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 28 05:11:37.813283 ignition[1024]: INFO : Ignition 2.22.0 Oct 28 05:11:37.813283 ignition[1024]: INFO : Stage: mount Oct 28 05:11:37.816038 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 05:11:37.816038 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 05:11:37.819911 ignition[1024]: INFO : mount: mount passed Oct 28 05:11:37.821127 ignition[1024]: INFO : Ignition finished successfully Oct 28 05:11:37.824981 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 28 05:11:37.828297 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 28 05:11:37.852766 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 28 05:11:37.880378 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1036)
Oct 28 05:11:37.884020 kernel: BTRFS info (device vda6): first mount of filesystem 7acd037c-32ce-4796-90d6-101869832417
Oct 28 05:11:37.884083 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 28 05:11:37.888133 kernel: BTRFS info (device vda6): turning on async discard
Oct 28 05:11:37.888158 kernel: BTRFS info (device vda6): enabling free space tree
Oct 28 05:11:37.890038 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 28 05:11:37.937909 ignition[1053]: INFO : Ignition 2.22.0
Oct 28 05:11:37.937909 ignition[1053]: INFO : Stage: files
Oct 28 05:11:37.940600 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 05:11:37.940600 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 05:11:37.940600 ignition[1053]: DEBUG : files: compiled without relabeling support, skipping
Oct 28 05:11:37.946998 ignition[1053]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 28 05:11:37.946998 ignition[1053]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 28 05:11:37.955238 ignition[1053]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 28 05:11:37.957591 ignition[1053]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 28 05:11:37.960132 unknown[1053]: wrote ssh authorized keys file for user: core
Oct 28 05:11:37.962131 ignition[1053]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 28 05:11:37.962131 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 28 05:11:37.962131 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 28 05:11:38.012177 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 28 05:11:38.502224 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 28 05:11:38.502224 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 28 05:11:38.508581 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 28 05:11:38.508581 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 28 05:11:38.508581 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 28 05:11:38.508581 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 28 05:11:38.508581 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 28 05:11:38.508581 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 28 05:11:38.508581 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 28 05:11:38.581806 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 28 05:11:38.584834 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 28 05:11:38.584834 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 28 05:11:38.615546 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 28 05:11:38.615546 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 28 05:11:38.666028 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 28 05:11:38.959548 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 28 05:11:39.910139 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 28 05:11:39.910139 ignition[1053]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 28 05:11:39.926663 ignition[1053]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 28 05:11:40.113940 ignition[1053]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 28 05:11:40.113940 ignition[1053]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 28 05:11:40.113940 ignition[1053]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 28 05:11:40.121738 ignition[1053]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 28 05:11:40.121738 ignition[1053]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 28 05:11:40.121738 ignition[1053]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 28 05:11:40.121738 ignition[1053]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 28 05:11:40.157981 ignition[1053]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 28 05:11:40.178627 ignition[1053]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 28 05:11:40.181539 ignition[1053]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 28 05:11:40.181539 ignition[1053]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 28 05:11:40.181539 ignition[1053]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 28 05:11:40.181539 ignition[1053]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 28 05:11:40.181539 ignition[1053]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 28 05:11:40.181539 ignition[1053]: INFO : files: files passed
Oct 28 05:11:40.181539 ignition[1053]: INFO : Ignition finished successfully
Oct 28 05:11:40.194192 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 28 05:11:40.198567 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 28 05:11:40.201685 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 28 05:11:40.229850 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 28 05:11:40.231036 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 28 05:11:40.232607 initrd-setup-root-after-ignition[1084]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 28 05:11:40.237448 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 05:11:40.237448 initrd-setup-root-after-ignition[1086]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 05:11:40.244713 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 05:11:40.239018 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 28 05:11:40.241270 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 28 05:11:40.247099 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 28 05:11:40.371334 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 28 05:11:40.371507 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 28 05:11:40.375328 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 28 05:11:40.378697 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 28 05:11:40.382180 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 28 05:11:40.383512 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 28 05:11:40.422921 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 28 05:11:40.426327 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 28 05:11:40.453472 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 28 05:11:40.453845 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 28 05:11:40.454704 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 05:11:40.455301 systemd[1]: Stopped target timers.target - Timer Units.
Oct 28 05:11:40.465034 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 28 05:11:40.465167 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 28 05:11:40.470112 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 28 05:11:40.473408 systemd[1]: Stopped target basic.target - Basic System.
Oct 28 05:11:40.474294 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 28 05:11:40.478186 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 28 05:11:40.481788 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 28 05:11:40.484982 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 28 05:11:40.488317 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 28 05:11:40.492004 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 28 05:11:40.494881 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 28 05:11:40.498981 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 28 05:11:40.501931 systemd[1]: Stopped target swap.target - Swaps.
Oct 28 05:11:40.505368 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 28 05:11:40.505517 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 28 05:11:40.510597 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 28 05:11:40.511561 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 05:11:40.516061 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 28 05:11:40.519182 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 05:11:40.519916 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 28 05:11:40.520054 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 28 05:11:40.526475 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 28 05:11:40.526619 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 28 05:11:40.529852 systemd[1]: Stopped target paths.target - Path Units.
Oct 28 05:11:40.530917 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 28 05:11:40.534452 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 05:11:40.535216 systemd[1]: Stopped target slices.target - Slice Units.
Oct 28 05:11:40.539844 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 28 05:11:40.543751 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 28 05:11:40.543932 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 28 05:11:40.544694 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 28 05:11:40.544791 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 28 05:11:40.550550 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 28 05:11:40.550751 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 28 05:11:40.553941 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 28 05:11:40.554140 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 28 05:11:40.557869 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 28 05:11:40.562682 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 28 05:11:40.563401 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 28 05:11:40.563555 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 28 05:11:40.568146 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 28 05:11:40.568254 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 28 05:11:40.571911 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 28 05:11:40.572021 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 28 05:11:40.594408 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 28 05:11:40.595190 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 28 05:11:40.620101 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 28 05:11:40.683854 ignition[1111]: INFO : Ignition 2.22.0
Oct 28 05:11:40.683854 ignition[1111]: INFO : Stage: umount
Oct 28 05:11:40.686625 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 05:11:40.686625 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 05:11:40.686625 ignition[1111]: INFO : umount: umount passed
Oct 28 05:11:40.686625 ignition[1111]: INFO : Ignition finished successfully
Oct 28 05:11:40.693690 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 28 05:11:40.693839 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 28 05:11:40.695141 systemd[1]: Stopped target network.target - Network.
Oct 28 05:11:40.698718 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 28 05:11:40.698793 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 28 05:11:40.701382 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 28 05:11:40.701454 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 28 05:11:40.704369 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 28 05:11:40.704441 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 28 05:11:40.707808 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 28 05:11:40.707867 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 28 05:11:40.711015 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 28 05:11:40.713936 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 28 05:11:40.730842 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 28 05:11:40.731026 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 28 05:11:40.738498 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 28 05:11:40.738696 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 28 05:11:40.745660 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 28 05:11:40.746725 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 28 05:11:40.746788 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 05:11:40.755573 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 28 05:11:40.757468 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 28 05:11:40.757606 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 28 05:11:40.758784 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 28 05:11:40.758841 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 28 05:11:40.759281 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 28 05:11:40.759346 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 28 05:11:40.766838 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 28 05:11:40.767805 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 28 05:11:40.780584 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 28 05:11:40.782060 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 28 05:11:40.782166 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 28 05:11:40.791346 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 28 05:11:40.791679 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 28 05:11:40.792954 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 28 05:11:40.793007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 28 05:11:40.800112 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 28 05:11:40.800167 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 05:11:40.801210 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 28 05:11:40.801295 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 28 05:11:40.810688 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 28 05:11:40.810813 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 28 05:11:40.815195 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 28 05:11:40.815300 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 28 05:11:40.821799 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 28 05:11:40.825279 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 28 05:11:40.825400 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 28 05:11:40.826303 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 28 05:11:40.826390 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 05:11:40.826828 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 28 05:11:40.826898 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 28 05:11:40.835557 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 28 05:11:40.835663 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 05:11:40.842548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 28 05:11:40.842649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 05:11:40.854713 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 28 05:11:40.854863 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 28 05:11:40.862715 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 28 05:11:40.862832 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 28 05:11:40.864387 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 28 05:11:40.871338 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 28 05:11:40.905019 systemd[1]: Switching root.
Oct 28 05:11:40.956675 systemd-journald[310]: Journal stopped
Oct 28 05:11:42.285753 systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Oct 28 05:11:42.285823 kernel: SELinux: policy capability network_peer_controls=1
Oct 28 05:11:42.285838 kernel: SELinux: policy capability open_perms=1
Oct 28 05:11:42.285855 kernel: SELinux: policy capability extended_socket_class=1
Oct 28 05:11:42.285867 kernel: SELinux: policy capability always_check_network=0
Oct 28 05:11:42.286018 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 28 05:11:42.286057 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 28 05:11:42.286079 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 28 05:11:42.286092 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 28 05:11:42.286104 kernel: SELinux: policy capability userspace_initial_context=0
Oct 28 05:11:42.286116 kernel: audit: type=1403 audit(1761628301.344:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 28 05:11:42.286130 systemd[1]: Successfully loaded SELinux policy in 69.367ms.
Oct 28 05:11:42.286168 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.746ms.
Oct 28 05:11:42.286200 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 28 05:11:42.286238 systemd[1]: Detected virtualization kvm.
Oct 28 05:11:42.286262 systemd[1]: Detected architecture x86-64.
Oct 28 05:11:42.286286 systemd[1]: Detected first boot.
Oct 28 05:11:42.286309 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 28 05:11:42.286322 zram_generator::config[1158]: No configuration found.
Oct 28 05:11:42.286344 kernel: Guest personality initialized and is inactive
Oct 28 05:11:42.286370 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 28 05:11:42.286382 kernel: Initialized host personality
Oct 28 05:11:42.286395 kernel: NET: Registered PF_VSOCK protocol family
Oct 28 05:11:42.286408 systemd[1]: Populated /etc with preset unit settings.
Oct 28 05:11:42.286426 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 28 05:11:42.286447 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 28 05:11:42.286461 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 28 05:11:42.286474 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 28 05:11:42.286495 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 28 05:11:42.286508 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 28 05:11:42.286522 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 28 05:11:42.286538 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 28 05:11:42.286558 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 28 05:11:42.286571 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 28 05:11:42.286584 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 28 05:11:42.286596 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 05:11:42.286610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 05:11:42.286622 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 28 05:11:42.286635 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 28 05:11:42.286655 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 28 05:11:42.286669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 28 05:11:42.286686 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 28 05:11:42.286702 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 05:11:42.286715 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 28 05:11:42.286728 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 28 05:11:42.286749 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 28 05:11:42.286761 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 28 05:11:42.286774 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 28 05:11:42.286787 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 05:11:42.286800 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 28 05:11:42.286812 systemd[1]: Reached target slices.target - Slice Units.
Oct 28 05:11:42.286825 systemd[1]: Reached target swap.target - Swaps.
Oct 28 05:11:42.286837 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 28 05:11:42.286858 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 28 05:11:42.286871 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 28 05:11:42.286883 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 05:11:42.286896 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 28 05:11:42.286909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 05:11:42.286922 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 28 05:11:42.286934 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 28 05:11:42.286964 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 28 05:11:42.286981 systemd[1]: Mounting media.mount - External Media Directory...
Oct 28 05:11:42.286998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 28 05:11:42.287012 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 28 05:11:42.287028 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 28 05:11:42.287046 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 28 05:11:42.287063 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 28 05:11:42.287089 systemd[1]: Reached target machines.target - Containers.
Oct 28 05:11:42.287102 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 28 05:11:42.287115 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 28 05:11:42.287127 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 28 05:11:42.287142 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 28 05:11:42.287158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 28 05:11:42.287177 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 28 05:11:42.287190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 28 05:11:42.287202 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 28 05:11:42.287215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 28 05:11:42.287227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 28 05:11:42.287240 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 28 05:11:42.287252 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 28 05:11:42.287273 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 28 05:11:42.287286 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 28 05:11:42.287299 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 28 05:11:42.287313 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 28 05:11:42.287325 kernel: fuse: init (API version 7.41)
Oct 28 05:11:42.287338 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 28 05:11:42.287350 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 28 05:11:42.287391 kernel: ACPI: bus type drm_connector registered
Oct 28 05:11:42.287404 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 28 05:11:42.287417 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 28 05:11:42.287453 systemd-journald[1236]: Collecting audit messages is disabled.
Oct 28 05:11:42.287493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 28 05:11:42.287506 systemd-journald[1236]: Journal started
Oct 28 05:11:42.287530 systemd-journald[1236]: Runtime Journal (/run/log/journal/1487c99eda1d4860b32396c9a7d1c871) is 5.9M, max 47.8M, 41.8M free.
Oct 28 05:11:41.947030 systemd[1]: Queued start job for default target multi-user.target.
Oct 28 05:11:41.967253 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 28 05:11:41.967791 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 28 05:11:42.291851 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 28 05:11:42.293776 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 28 05:11:42.297739 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 28 05:11:42.299733 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 28 05:11:42.301822 systemd[1]: Mounted media.mount - External Media Directory.
Oct 28 05:11:42.304167 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 28 05:11:42.306668 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 28 05:11:42.308680 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 28 05:11:42.310640 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 28 05:11:42.313161 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 05:11:42.315855 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 28 05:11:42.316077 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 28 05:11:42.318505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 28 05:11:42.318746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 28 05:11:42.321407 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 28 05:11:42.321745 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 28 05:11:42.323835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 28 05:11:42.324053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 28 05:11:42.326865 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 28 05:11:42.327160 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 28 05:11:42.329383 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 28 05:11:42.329629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 28 05:11:42.332014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 28 05:11:42.367459 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 28 05:11:42.371022 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 28 05:11:42.380048 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 28 05:11:42.394113 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 28 05:11:42.396509 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 28 05:11:42.399930 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 28 05:11:42.402916 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 28 05:11:42.404792 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 28 05:11:42.404903 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 28 05:11:42.407833 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 28 05:11:42.410149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 28 05:11:42.419501 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 28 05:11:42.422487 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 28 05:11:42.424798 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 28 05:11:42.427500 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 28 05:11:42.429431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 28 05:11:42.430666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 28 05:11:42.434602 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 28 05:11:42.439646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 28 05:11:42.444000 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 28 05:11:42.446552 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 28 05:11:42.447181 systemd-journald[1236]: Time spent on flushing to /var/log/journal/1487c99eda1d4860b32396c9a7d1c871 is 16.324ms for 1035 entries.
Oct 28 05:11:42.447181 systemd-journald[1236]: System Journal (/var/log/journal/1487c99eda1d4860b32396c9a7d1c871) is 8M, max 163.5M, 155.5M free.
Oct 28 05:11:42.470505 systemd-journald[1236]: Received client request to flush runtime journal.
Oct 28 05:11:42.466929 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 28 05:11:42.473200 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 28 05:11:42.485378 kernel: loop1: detected capacity change from 0 to 229808
Oct 28 05:11:42.489109 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 28 05:11:42.492285 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 28 05:11:42.498076 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 28 05:11:42.500650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 28 05:11:42.502928 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Oct 28 05:11:42.502960 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Oct 28 05:11:42.514815 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 28 05:11:42.520641 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 28 05:11:42.526400 kernel: loop2: detected capacity change from 0 to 111544
Oct 28 05:11:42.537319 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 28 05:11:42.561903 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 28 05:11:42.566213 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 28 05:11:42.572438 kernel: loop3: detected capacity change from 0 to 128912
Oct 28 05:11:42.573054 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 28 05:11:42.585227 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 28 05:11:42.605428 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Oct 28 05:11:42.605463 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Oct 28 05:11:42.611406 kernel: loop4: detected capacity change from 0 to 229808
Oct 28 05:11:42.613912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 05:11:42.622375 kernel: loop5: detected capacity change from 0 to 111544
Oct 28 05:11:42.631392 kernel: loop6: detected capacity change from 0 to 128912
Oct 28 05:11:42.633938 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 28 05:11:42.640144 (sd-merge)[1302]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 28 05:11:42.643749 (sd-merge)[1302]: Merged extensions into '/usr'.
Oct 28 05:11:42.648210 systemd[1]: Reload requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Oct 28 05:11:42.648380 systemd[1]: Reloading... Oct 28 05:11:42.707883 zram_generator::config[1340]: No configuration found. Oct 28 05:11:42.721420 systemd-resolved[1297]: Positive Trust Anchors: Oct 28 05:11:42.721439 systemd-resolved[1297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 05:11:42.721444 systemd-resolved[1297]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 05:11:42.721502 systemd-resolved[1297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 05:11:42.727438 systemd-resolved[1297]: Defaulting to hostname 'linux'. Oct 28 05:11:42.991561 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 28 05:11:42.992550 systemd[1]: Reloading finished in 343 ms. Oct 28 05:11:43.033821 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 05:11:43.036068 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 28 05:11:43.040718 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 28 05:11:43.184150 systemd[1]: Starting ensure-sysext.service... Oct 28 05:11:43.186726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 28 05:11:43.199411 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)... Oct 28 05:11:43.199429 systemd[1]: Reloading... Oct 28 05:11:43.210901 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 28 05:11:43.210948 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 28 05:11:43.211232 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 28 05:11:43.211525 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 28 05:11:43.212461 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 28 05:11:43.212727 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Oct 28 05:11:43.212799 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Oct 28 05:11:43.226303 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 05:11:43.226316 systemd-tmpfiles[1374]: Skipping /boot Oct 28 05:11:43.237677 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 05:11:43.237848 systemd-tmpfiles[1374]: Skipping /boot Oct 28 05:11:43.279743 zram_generator::config[1404]: No configuration found. Oct 28 05:11:43.470050 systemd[1]: Reloading finished in 270 ms. Oct 28 05:11:43.490108 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 28 05:11:43.508490 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 05:11:43.519199 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 05:11:43.521945 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Oct 28 05:11:43.531552 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 28 05:11:43.537685 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 28 05:11:43.542706 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 05:11:43.547577 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 28 05:11:43.551759 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:11:43.553233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 05:11:43.559043 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 05:11:43.562891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 05:11:43.567561 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 05:11:43.569376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 05:11:43.569498 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 05:11:43.569590 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:11:43.580116 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 05:11:43.580339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 05:11:43.631527 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 28 05:11:43.631947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 05:11:43.641815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 05:11:43.643701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 05:11:43.643813 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 05:11:43.643926 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:11:43.646010 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 28 05:11:43.648879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 05:11:43.649192 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 05:11:43.652015 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 05:11:43.652843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 05:11:43.654879 systemd-udevd[1453]: Using default interface naming scheme 'v257'. Oct 28 05:11:43.658591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 05:11:43.658815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 05:11:43.673533 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 28 05:11:43.677067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 28 05:11:43.677341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 05:11:43.679601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 05:11:43.680186 augenrules[1477]: No rules Oct 28 05:11:43.682555 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 05:11:43.687391 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 05:11:43.697199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 05:11:43.699740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 05:11:43.699887 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 05:11:43.700027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 05:11:43.703202 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 05:11:43.703564 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 05:11:43.705995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 05:11:43.706328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 05:11:43.709463 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 05:11:43.709719 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 05:11:43.712291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 05:11:43.715134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Oct 28 05:11:43.718202 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 28 05:11:43.720958 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 05:11:43.721171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 05:11:43.723819 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 05:11:43.735178 systemd[1]: Finished ensure-sysext.service. Oct 28 05:11:43.743857 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 05:11:43.745910 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 05:11:43.746053 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 05:11:43.749052 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 28 05:11:43.752547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 28 05:11:43.913903 systemd-networkd[1506]: lo: Link UP Oct 28 05:11:43.913916 systemd-networkd[1506]: lo: Gained carrier Oct 28 05:11:43.916891 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 05:11:43.919250 systemd[1]: Reached target network.target - Network. Oct 28 05:11:43.926479 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 28 05:11:43.931248 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 28 05:11:43.942166 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 28 05:11:43.944247 systemd[1]: Reached target time-set.target - System Time Set. 
Oct 28 05:11:43.951099 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 28 05:11:43.961029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 05:11:43.995069 systemd-networkd[1506]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 05:11:43.995144 systemd-networkd[1506]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 05:11:43.996630 systemd-networkd[1506]: eth0: Link UP Oct 28 05:11:43.997965 systemd-networkd[1506]: eth0: Gained carrier Oct 28 05:11:43.998100 systemd-networkd[1506]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 05:11:44.086699 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 28 05:11:44.096444 systemd-networkd[1506]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 05:11:44.098080 systemd-timesyncd[1508]: Network configuration changed, trying to establish connection. Oct 28 05:11:44.903644 systemd-timesyncd[1508]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 28 05:11:44.903747 systemd-timesyncd[1508]: Initial clock synchronization to Tue 2025-10-28 05:11:44.903540 UTC. Oct 28 05:11:44.905044 systemd-resolved[1297]: Clock change detected. Flushing caches. Oct 28 05:11:44.905808 kernel: mousedev: PS/2 mouse device common for all mice Oct 28 05:11:44.918822 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 28 05:11:44.923334 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 28 05:11:44.941959 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Oct 28 05:11:44.976672 kernel: ACPI: button: Power Button [PWRF] Oct 28 05:11:44.979404 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 28 05:11:44.979836 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 28 05:11:44.983885 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 28 05:11:45.010772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:11:45.116720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 05:11:45.117107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:11:45.123167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 05:11:45.201206 kernel: kvm_amd: TSC scaling supported Oct 28 05:11:45.201313 kernel: kvm_amd: Nested Virtualization enabled Oct 28 05:11:45.201374 kernel: kvm_amd: Nested Paging enabled Oct 28 05:11:45.202737 kernel: kvm_amd: LBR virtualization supported Oct 28 05:11:45.202762 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 28 05:11:45.203762 kernel: kvm_amd: Virtual GIF supported Oct 28 05:11:45.251105 kernel: EDAC MC: Ver: 3.0.0 Oct 28 05:11:45.261936 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 05:11:45.288860 ldconfig[1445]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 28 05:11:45.339741 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 28 05:11:45.343631 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 28 05:11:45.371431 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 28 05:11:45.373607 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 05:11:45.375494 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 28 05:11:45.377565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 28 05:11:45.379718 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 28 05:11:45.381998 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 28 05:11:45.383855 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 28 05:11:45.385895 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 28 05:11:45.387941 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 28 05:11:45.387973 systemd[1]: Reached target paths.target - Path Units. Oct 28 05:11:45.389589 systemd[1]: Reached target timers.target - Timer Units. Oct 28 05:11:45.392072 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 28 05:11:45.395627 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 28 05:11:45.400459 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 28 05:11:45.402704 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 28 05:11:45.404750 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 28 05:11:45.412006 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 28 05:11:45.414225 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 28 05:11:45.416739 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 28 05:11:45.419620 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 05:11:45.421319 systemd[1]: Reached target basic.target - Basic System. 
Oct 28 05:11:45.422915 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 28 05:11:45.422942 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 28 05:11:45.423954 systemd[1]: Starting containerd.service - containerd container runtime... Oct 28 05:11:45.426808 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 28 05:11:45.430055 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 28 05:11:45.433479 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 28 05:11:45.436438 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 28 05:11:45.438165 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 28 05:11:45.439730 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 28 05:11:45.443719 jq[1568]: false Oct 28 05:11:45.442958 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 28 05:11:45.447888 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 28 05:11:45.450510 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing passwd entry cache Oct 28 05:11:45.451017 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 28 05:11:45.451902 oslogin_cache_refresh[1570]: Refreshing passwd entry cache Oct 28 05:11:45.456259 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 28 05:11:45.463505 extend-filesystems[1569]: Found /dev/vda6 Oct 28 05:11:45.462311 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 28 05:11:45.463277 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 28 05:11:45.463703 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 28 05:11:45.465011 systemd[1]: Starting update-engine.service - Update Engine... Oct 28 05:11:45.467462 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting users, quitting Oct 28 05:11:45.467453 oslogin_cache_refresh[1570]: Failure getting users, quitting Oct 28 05:11:45.467550 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 05:11:45.467550 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing group entry cache Oct 28 05:11:45.467484 oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 05:11:45.467555 oslogin_cache_refresh[1570]: Refreshing group entry cache Oct 28 05:11:45.471156 extend-filesystems[1569]: Found /dev/vda9 Oct 28 05:11:45.474853 extend-filesystems[1569]: Checking size of /dev/vda9 Oct 28 05:11:45.475532 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting groups, quitting Oct 28 05:11:45.475532 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 05:11:45.474264 oslogin_cache_refresh[1570]: Failure getting groups, quitting Oct 28 05:11:45.474879 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 28 05:11:45.474274 oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Oct 28 05:11:45.560294 update_engine[1585]: I20251028 05:11:45.544879 1585 main.cc:92] Flatcar Update Engine starting Oct 28 05:11:45.545642 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 28 05:11:45.548174 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 28 05:11:45.548412 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 28 05:11:45.548738 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 28 05:11:45.549036 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 28 05:11:45.551187 systemd[1]: motdgen.service: Deactivated successfully. Oct 28 05:11:45.551420 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 28 05:11:45.555159 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 28 05:11:45.555392 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 28 05:11:45.582823 jq[1586]: true Oct 28 05:11:45.583670 extend-filesystems[1569]: Resized partition /dev/vda9 Oct 28 05:11:45.591947 extend-filesystems[1612]: resize2fs 1.47.3 (8-Jul-2025) Oct 28 05:11:45.596151 tar[1594]: linux-amd64/LICENSE Oct 28 05:11:45.596619 tar[1594]: linux-amd64/helm Oct 28 05:11:45.614883 jq[1614]: true Oct 28 05:11:45.623232 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 28 05:11:45.635235 dbus-daemon[1566]: [system] SELinux support is enabled Oct 28 05:11:45.699508 update_engine[1585]: I20251028 05:11:45.639194 1585 update_check_scheduler.cc:74] Next update check in 8m5s Oct 28 05:11:45.636062 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 28 05:11:45.747092 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Oct 28 05:11:45.747133 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 28 05:11:45.783014 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 28 05:11:45.783052 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 28 05:11:45.785679 systemd[1]: Started update-engine.service - Update Engine. Oct 28 05:11:45.786805 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 28 05:11:45.795033 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 28 05:11:45.833270 systemd-logind[1584]: Watching system buttons on /dev/input/event2 (Power Button) Oct 28 05:11:45.833302 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 28 05:11:45.833873 systemd-logind[1584]: New seat seat0. Oct 28 05:11:45.834106 extend-filesystems[1612]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 28 05:11:45.834106 extend-filesystems[1612]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 28 05:11:45.834106 extend-filesystems[1612]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 28 05:11:45.870743 extend-filesystems[1569]: Resized filesystem in /dev/vda9 Oct 28 05:11:45.835053 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 28 05:11:45.835347 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 28 05:11:45.872731 systemd[1]: Started systemd-logind.service - User Login Management. Oct 28 05:11:45.878820 sshd_keygen[1609]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 28 05:11:45.902816 bash[1634]: Updated "/home/core/.ssh/authorized_keys" Oct 28 05:11:45.903758 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Oct 28 05:11:45.906970 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 28 05:11:45.909283 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 28 05:11:45.914353 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 28 05:11:45.918261 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 28 05:11:45.954661 systemd[1]: issuegen.service: Deactivated successfully. Oct 28 05:11:45.954987 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 28 05:11:46.001605 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 28 05:11:46.024156 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 28 05:11:46.034293 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 28 05:11:46.044290 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 28 05:11:46.046337 systemd[1]: Reached target getty.target - Login Prompts. 
Oct 28 05:11:46.118961 containerd[1611]: time="2025-10-28T05:11:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 28 05:11:46.119689 containerd[1611]: time="2025-10-28T05:11:46.119645798Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 28 05:11:46.132338 containerd[1611]: time="2025-10-28T05:11:46.132289860Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.832µs"
Oct 28 05:11:46.132338 containerd[1611]: time="2025-10-28T05:11:46.132315849Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 28 05:11:46.132338 containerd[1611]: time="2025-10-28T05:11:46.132332550Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 28 05:11:46.132519 containerd[1611]: time="2025-10-28T05:11:46.132503120Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 28 05:11:46.132541 containerd[1611]: time="2025-10-28T05:11:46.132531874Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 28 05:11:46.132569 containerd[1611]: time="2025-10-28T05:11:46.132557162Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 28 05:11:46.132642 containerd[1611]: time="2025-10-28T05:11:46.132627333Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 28 05:11:46.132723 containerd[1611]: time="2025-10-28T05:11:46.132640879Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 28 05:11:46.132915 containerd[1611]: time="2025-10-28T05:11:46.132880358Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 28 05:11:46.132915 containerd[1611]: time="2025-10-28T05:11:46.132901367Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 28 05:11:46.132915 containerd[1611]: time="2025-10-28T05:11:46.132911586Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 28 05:11:46.132972 containerd[1611]: time="2025-10-28T05:11:46.132919501Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 28 05:11:46.133034 containerd[1611]: time="2025-10-28T05:11:46.133017094Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 28 05:11:46.133368 containerd[1611]: time="2025-10-28T05:11:46.133341913Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 28 05:11:46.133390 containerd[1611]: time="2025-10-28T05:11:46.133376709Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 28 05:11:46.133390 containerd[1611]: time="2025-10-28T05:11:46.133386307Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 28 05:11:46.133426 containerd[1611]: time="2025-10-28T05:11:46.133420260Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 28 05:11:46.181644 containerd[1611]: time="2025-10-28T05:11:46.181527719Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 28 05:11:46.181976 containerd[1611]: time="2025-10-28T05:11:46.181935504Z" level=info msg="metadata content store policy set" policy=shared
Oct 28 05:11:46.189416 containerd[1611]: time="2025-10-28T05:11:46.189329310Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 28 05:11:46.189626 containerd[1611]: time="2025-10-28T05:11:46.189457821Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 28 05:11:46.189626 containerd[1611]: time="2025-10-28T05:11:46.189562788Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 28 05:11:46.189626 containerd[1611]: time="2025-10-28T05:11:46.189582865Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 28 05:11:46.189626 containerd[1611]: time="2025-10-28T05:11:46.189624143Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 28 05:11:46.189725 containerd[1611]: time="2025-10-28T05:11:46.189639401Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 28 05:11:46.189725 containerd[1611]: time="2025-10-28T05:11:46.189651764Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 28 05:11:46.189725 containerd[1611]: time="2025-10-28T05:11:46.189667504Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 28 05:11:46.189725 containerd[1611]: time="2025-10-28T05:11:46.189700165Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 28 05:11:46.189725 containerd[1611]: time="2025-10-28T05:11:46.189713330Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 28 05:11:46.189725 containerd[1611]: time="2025-10-28T05:11:46.189725683Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 28 05:11:46.189882 containerd[1611]: time="2025-10-28T05:11:46.189752223Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 28 05:11:46.190075 containerd[1611]: time="2025-10-28T05:11:46.190026647Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 28 05:11:46.190119 containerd[1611]: time="2025-10-28T05:11:46.190078645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 28 05:11:46.190119 containerd[1611]: time="2025-10-28T05:11:46.190110665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 28 05:11:46.190198 containerd[1611]: time="2025-10-28T05:11:46.190134830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 28 05:11:46.190198 containerd[1611]: time="2025-10-28T05:11:46.190157914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 28 05:11:46.190198 containerd[1611]: time="2025-10-28T05:11:46.190183001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 28 05:11:46.190198 containerd[1611]: time="2025-10-28T05:11:46.190197368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 28 05:11:46.190320 containerd[1611]: time="2025-10-28T05:11:46.190265535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 28 05:11:46.190320 containerd[1611]: time="2025-10-28T05:11:46.190302324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 28 05:11:46.190320 containerd[1611]: time="2025-10-28T05:11:46.190314317Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 28 05:11:46.190410 containerd[1611]: time="2025-10-28T05:11:46.190334505Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 28 05:11:46.190591 containerd[1611]: time="2025-10-28T05:11:46.190547905Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 28 05:11:46.190591 containerd[1611]: time="2025-10-28T05:11:46.190579584Z" level=info msg="Start snapshots syncer"
Oct 28 05:11:46.190674 containerd[1611]: time="2025-10-28T05:11:46.190641921Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 28 05:11:46.191393 containerd[1611]: time="2025-10-28T05:11:46.191318841Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 28 05:11:46.191647 containerd[1611]: time="2025-10-28T05:11:46.191413288Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 28 05:11:46.191647 containerd[1611]: time="2025-10-28T05:11:46.191518145Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 28 05:11:46.191647 containerd[1611]: time="2025-10-28T05:11:46.191630636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 28 05:11:46.191719 containerd[1611]: time="2025-10-28T05:11:46.191663938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 28 05:11:46.191719 containerd[1611]: time="2025-10-28T05:11:46.191689827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 28 05:11:46.191719 containerd[1611]: time="2025-10-28T05:11:46.191702230Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 28 05:11:46.191719 containerd[1611]: time="2025-10-28T05:11:46.191717589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 28 05:11:46.191841 containerd[1611]: time="2025-10-28T05:11:46.191727768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 28 05:11:46.191841 containerd[1611]: time="2025-10-28T05:11:46.191740522Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 28 05:11:46.191903 containerd[1611]: time="2025-10-28T05:11:46.191856900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 28 05:11:46.191903 containerd[1611]: time="2025-10-28T05:11:46.191877128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 28 05:11:46.191972 containerd[1611]: time="2025-10-28T05:11:46.191952669Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 28 05:11:46.192031 containerd[1611]: time="2025-10-28T05:11:46.192010137Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 28 05:11:46.192063 containerd[1611]: time="2025-10-28T05:11:46.192034162Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 28 05:11:46.192158 containerd[1611]: time="2025-10-28T05:11:46.192048449Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 28 05:11:46.192158 containerd[1611]: time="2025-10-28T05:11:46.192146553Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 28 05:11:46.192158 containerd[1611]: time="2025-10-28T05:11:46.192158275Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 28 05:11:46.192223 containerd[1611]: time="2025-10-28T05:11:46.192181448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 28 05:11:46.192223 containerd[1611]: time="2025-10-28T05:11:46.192192479Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 28 05:11:46.192223 containerd[1611]: time="2025-10-28T05:11:46.192214941Z" level=info msg="runtime interface created"
Oct 28 05:11:46.192223 containerd[1611]: time="2025-10-28T05:11:46.192220812Z" level=info msg="created NRI interface"
Oct 28 05:11:46.192342 containerd[1611]: time="2025-10-28T05:11:46.192228918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 28 05:11:46.192342 containerd[1611]: time="2025-10-28T05:11:46.192240129Z" level=info msg="Connect containerd service"
Oct 28 05:11:46.192342 containerd[1611]: time="2025-10-28T05:11:46.192297937Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 28 05:11:46.196544 containerd[1611]: time="2025-10-28T05:11:46.196490309Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 28 05:11:46.442046 systemd-networkd[1506]: eth0: Gained IPv6LL
Oct 28 05:11:46.445343 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 28 05:11:46.449742 systemd[1]: Reached target network-online.target - Network is Online.
Oct 28 05:11:46.454007 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 28 05:11:46.459343 tar[1594]: linux-amd64/README.md
Oct 28 05:11:46.460157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 05:11:46.463270 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 28 05:11:46.485139 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 28 05:11:46.498183 containerd[1611]: time="2025-10-28T05:11:46.498098021Z" level=info msg="Start subscribing containerd event"
Oct 28 05:11:46.498990 containerd[1611]: time="2025-10-28T05:11:46.498900667Z" level=info msg="Start recovering state"
Oct 28 05:11:46.499475 containerd[1611]: time="2025-10-28T05:11:46.499423858Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 28 05:11:46.499623 containerd[1611]: time="2025-10-28T05:11:46.499597163Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 28 05:11:46.500296 containerd[1611]: time="2025-10-28T05:11:46.500252462Z" level=info msg="Start event monitor"
Oct 28 05:11:46.500339 containerd[1611]: time="2025-10-28T05:11:46.500327763Z" level=info msg="Start cni network conf syncer for default"
Oct 28 05:11:46.500360 containerd[1611]: time="2025-10-28T05:11:46.500347440Z" level=info msg="Start streaming server"
Oct 28 05:11:46.500545 containerd[1611]: time="2025-10-28T05:11:46.500431026Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 28 05:11:46.501137 containerd[1611]: time="2025-10-28T05:11:46.500887653Z" level=info msg="runtime interface starting up..."
Oct 28 05:11:46.501137 containerd[1611]: time="2025-10-28T05:11:46.500903513Z" level=info msg="starting plugins..."
Oct 28 05:11:46.501137 containerd[1611]: time="2025-10-28T05:11:46.500926526Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 28 05:11:46.501444 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 28 05:11:46.506050 containerd[1611]: time="2025-10-28T05:11:46.506009067Z" level=info msg="containerd successfully booted in 0.387874s"
Oct 28 05:11:46.506660 systemd[1]: Started containerd.service - containerd container runtime.
Oct 28 05:11:46.509724 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 28 05:11:46.510158 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 28 05:11:46.513613 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 28 05:11:46.627297 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 28 05:11:46.630462 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:57142.service - OpenSSH per-connection server daemon (10.0.0.1:57142).
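The `failed to load cni during init` error above is expected on a first boot: per the cri plugin config dump earlier in the log, containerd looks for a network config in /etc/cni/net.d (with plugin binaries in /opt/cni/bin), and nothing has installed one yet; the cni conf syncer started here picks the file up once it appears. A minimal sketch of a conflist that would satisfy the loader — the network name and subnet below are illustrative, not taken from this system:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

In a kubeadm-style setup this file is normally dropped in by the CNI add-on (flannel, Calico, etc.) after the control plane is up, which is why the error clears on its own later.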
Oct 28 05:11:46.770440 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 57142 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q
Oct 28 05:11:46.772779 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 05:11:46.780236 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 28 05:11:46.783141 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 28 05:11:46.791331 systemd-logind[1584]: New session 1 of user core.
Oct 28 05:11:46.811609 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 28 05:11:46.817274 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 28 05:11:46.847995 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 28 05:11:46.850526 systemd-logind[1584]: New session c1 of user core.
Oct 28 05:11:47.070874 systemd[1709]: Queued start job for default target default.target.
Oct 28 05:11:47.098213 systemd[1709]: Created slice app.slice - User Application Slice.
Oct 28 05:11:47.098237 systemd[1709]: Reached target paths.target - Paths.
Oct 28 05:11:47.098276 systemd[1709]: Reached target timers.target - Timers.
Oct 28 05:11:47.099752 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 28 05:11:47.113027 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 28 05:11:47.113162 systemd[1709]: Reached target sockets.target - Sockets.
Oct 28 05:11:47.113200 systemd[1709]: Reached target basic.target - Basic System.
Oct 28 05:11:47.113241 systemd[1709]: Reached target default.target - Main User Target.
Oct 28 05:11:47.113275 systemd[1709]: Startup finished in 178ms.
Oct 28 05:11:47.113544 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 28 05:11:47.116915 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 28 05:11:47.141929 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:57148.service - OpenSSH per-connection server daemon (10.0.0.1:57148).
Oct 28 05:11:47.199502 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 57148 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q
Oct 28 05:11:47.200816 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 05:11:47.205454 systemd-logind[1584]: New session 2 of user core.
Oct 28 05:11:47.276132 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 28 05:11:47.295478 sshd[1723]: Connection closed by 10.0.0.1 port 57148
Oct 28 05:11:47.295708 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Oct 28 05:11:47.309977 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:57148.service: Deactivated successfully.
Oct 28 05:11:47.311755 systemd[1]: session-2.scope: Deactivated successfully.
Oct 28 05:11:47.312555 systemd-logind[1584]: Session 2 logged out. Waiting for processes to exit.
Oct 28 05:11:47.315278 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:57164.service - OpenSSH per-connection server daemon (10.0.0.1:57164).
Oct 28 05:11:47.318473 systemd-logind[1584]: Removed session 2.
Oct 28 05:11:47.391652 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 57164 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q
Oct 28 05:11:47.393679 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 05:11:47.398742 systemd-logind[1584]: New session 3 of user core.
Oct 28 05:11:47.408984 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 28 05:11:47.427810 sshd[1732]: Connection closed by 10.0.0.1 port 57164
Oct 28 05:11:47.426954 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Oct 28 05:11:47.432477 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:57164.service: Deactivated successfully.
Oct 28 05:11:47.434668 systemd[1]: session-3.scope: Deactivated successfully.
Oct 28 05:11:47.435454 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit.
Oct 28 05:11:47.438080 systemd-logind[1584]: Removed session 3.
Oct 28 05:11:47.821127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 05:11:47.823506 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 28 05:11:47.825375 systemd[1]: Startup finished in 3.136s (kernel) + 7.230s (initrd) + 5.746s (userspace) = 16.113s.
Oct 28 05:11:47.842166 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 28 05:11:48.593675 kubelet[1742]: E1028 05:11:48.593552 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 28 05:11:48.597962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 28 05:11:48.598197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 28 05:11:48.598665 systemd[1]: kubelet.service: Consumed 1.895s CPU time, 267.4M memory peak.
Oct 28 05:11:57.443928 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:35068.service - OpenSSH per-connection server daemon (10.0.0.1:35068).
Oct 28 05:11:57.508555 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 35068 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q
Oct 28 05:11:57.510564 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 05:11:57.515467 systemd-logind[1584]: New session 4 of user core.
Oct 28 05:11:57.524912 systemd[1]: Started session-4.scope - Session 4 of User core.
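The kubelet exit above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal state on a node that has not yet been joined to a cluster: `kubeadm init`/`kubeadm join` is what writes that file, so the unit fails and is restarted by systemd until that happens. As a rough sketch of what the kubelet is looking for — the values below are illustrative, not recovered from this host — the file is a KubeletConfiguration object:

```yaml
# /var/lib/kubelet/config.yaml (illustrative; kubeadm generates the real one)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, matching SystemdCgroup=true in the containerd runc
# options dumped earlier in this log
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```

Because the driver and runtime endpoint must agree with the container runtime's configuration, kubeadm derives them rather than leaving them to hand edits.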
Oct 28 05:11:57.538657 sshd[1758]: Connection closed by 10.0.0.1 port 35068
Oct 28 05:11:57.539087 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Oct 28 05:11:57.552278 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:35068.service: Deactivated successfully.
Oct 28 05:11:57.554179 systemd[1]: session-4.scope: Deactivated successfully.
Oct 28 05:11:57.554947 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit.
Oct 28 05:11:57.557735 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:35070.service - OpenSSH per-connection server daemon (10.0.0.1:35070).
Oct 28 05:11:57.558539 systemd-logind[1584]: Removed session 4.
Oct 28 05:11:57.609955 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 35070 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q
Oct 28 05:11:57.611372 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 05:11:57.616469 systemd-logind[1584]: New session 5 of user core.
Oct 28 05:11:57.628946 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 28 05:11:57.638184 sshd[1767]: Connection closed by 10.0.0.1 port 35070
Oct 28 05:11:57.638542 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Oct 28 05:11:57.651668 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:35070.service: Deactivated successfully.
Oct 28 05:11:57.654006 systemd[1]: session-5.scope: Deactivated successfully.
Oct 28 05:11:57.654873 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit.
Oct 28 05:11:57.658423 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:35076.service - OpenSSH per-connection server daemon (10.0.0.1:35076).
Oct 28 05:11:57.659114 systemd-logind[1584]: Removed session 5.
Oct 28 05:11:57.706671 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 35076 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q
Oct 28 05:11:57.708294 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 05:11:57.712907 systemd-logind[1584]: New session 6 of user core.
Oct 28 05:11:57.722910 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 28 05:11:57.735813 sshd[1777]: Connection closed by 10.0.0.1 port 35076
Oct 28 05:11:57.736072 sshd-session[1773]: pam_unix(sshd:session): session closed for user core
Oct 28 05:11:57.746146 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:35076.service: Deactivated successfully.
Oct 28 05:11:57.747998 systemd[1]: session-6.scope: Deactivated successfully.
Oct 28 05:11:57.748724 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit.
Oct 28 05:11:57.751198 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:35082.service - OpenSSH per-connection server daemon (10.0.0.1:35082).
Oct 28 05:11:57.751899 systemd-logind[1584]: Removed session 6.
Oct 28 05:11:57.810224 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 35082 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q
Oct 28 05:11:57.811556 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 05:11:57.816121 systemd-logind[1584]: New session 7 of user core.
Oct 28 05:11:57.825954 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 28 05:11:57.850350 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 28 05:11:57.850676 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 28 05:11:57.866471 sudo[1788]: pam_unix(sudo:session): session closed for user root
Oct 28 05:11:57.868630 sshd[1787]: Connection closed by 10.0.0.1 port 35082
Oct 28 05:11:57.868954 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Oct 28 05:11:57.881533 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:35082.service: Deactivated successfully.
Oct 28 05:11:57.884010 systemd[1]: session-7.scope: Deactivated successfully.
Oct 28 05:11:57.884822 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit.
Oct 28 05:11:57.888303 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:35096.service - OpenSSH per-connection server daemon (10.0.0.1:35096).
Oct 28 05:11:57.889144 systemd-logind[1584]: Removed session 7.
Oct 28 05:11:57.946673 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 35096 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q
Oct 28 05:11:57.948858 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 05:11:57.954878 systemd-logind[1584]: New session 8 of user core.
Oct 28 05:11:57.964998 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 28 05:11:57.982282 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 28 05:11:57.982664 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 28 05:11:57.989655 sudo[1800]: pam_unix(sudo:session): session closed for user root
Oct 28 05:11:57.999573 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 28 05:11:57.999986 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 28 05:11:58.013447 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 28 05:11:58.073213 augenrules[1822]: No rules
Oct 28 05:11:58.075196 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 28 05:11:58.075506 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 28 05:11:58.076773 sudo[1799]: pam_unix(sudo:session): session closed for user root
Oct 28 05:11:58.078773 sshd[1798]: Connection closed by 10.0.0.1 port 35096
Oct 28 05:11:58.079122 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Oct 28 05:11:58.093645 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:35096.service: Deactivated successfully.
Oct 28 05:11:58.095380 systemd[1]: session-8.scope: Deactivated successfully.
Oct 28 05:11:58.096134 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit.
Oct 28 05:11:58.098764 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:35102.service - OpenSSH per-connection server daemon (10.0.0.1:35102).
Oct 28 05:11:58.099387 systemd-logind[1584]: Removed session 8.
Oct 28 05:11:58.150384 sshd[1831]: Accepted publickey for core from 10.0.0.1 port 35102 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q
Oct 28 05:11:58.152395 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 05:11:58.157936 systemd-logind[1584]: New session 9 of user core.
Oct 28 05:11:58.167950 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 28 05:11:58.184055 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 28 05:11:58.184456 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 28 05:11:58.651534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 28 05:11:58.653633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 05:11:58.791391 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 28 05:11:58.806293 (dockerd)[1859]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 28 05:11:58.996654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 05:11:59.011173 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 28 05:11:59.206306 kubelet[1865]: E1028 05:11:59.206212 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 28 05:11:59.213894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 28 05:11:59.214155 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 28 05:11:59.214606 systemd[1]: kubelet.service: Consumed 470ms CPU time, 110.7M memory peak.
Oct 28 05:11:59.328459 dockerd[1859]: time="2025-10-28T05:11:59.328297265Z" level=info msg="Starting up"
Oct 28 05:11:59.329116 dockerd[1859]: time="2025-10-28T05:11:59.329087777Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 28 05:11:59.344983 dockerd[1859]: time="2025-10-28T05:11:59.344927122Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 28 05:11:59.622028 dockerd[1859]: time="2025-10-28T05:11:59.621900075Z" level=info msg="Loading containers: start."
Oct 28 05:11:59.633821 kernel: Initializing XFRM netlink socket
Oct 28 05:11:59.895280 systemd-networkd[1506]: docker0: Link UP
Oct 28 05:11:59.899694 dockerd[1859]: time="2025-10-28T05:11:59.899659924Z" level=info msg="Loading containers: done."
Oct 28 05:11:59.913619 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3829910725-merged.mount: Deactivated successfully.
Oct 28 05:11:59.915147 dockerd[1859]: time="2025-10-28T05:11:59.915108305Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 28 05:11:59.915215 dockerd[1859]: time="2025-10-28T05:11:59.915204145Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 28 05:11:59.915292 dockerd[1859]: time="2025-10-28T05:11:59.915279005Z" level=info msg="Initializing buildkit"
Oct 28 05:11:59.943502 dockerd[1859]: time="2025-10-28T05:11:59.943471195Z" level=info msg="Completed buildkit initialization"
Oct 28 05:11:59.949368 dockerd[1859]: time="2025-10-28T05:11:59.949339951Z" level=info msg="Daemon has completed initialization"
Oct 28 05:11:59.949506 dockerd[1859]: time="2025-10-28T05:11:59.949405154Z" level=info msg="API listen on /run/docker.sock"
Oct 28 05:11:59.949639 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 28 05:12:00.982206 containerd[1611]: time="2025-10-28T05:12:00.982127166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Oct 28 05:12:01.637239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622373311.mount: Deactivated successfully.
Oct 28 05:12:02.831723 containerd[1611]: time="2025-10-28T05:12:02.831632348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:02.832358 containerd[1611]: time="2025-10-28T05:12:02.832302615Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Oct 28 05:12:02.833476 containerd[1611]: time="2025-10-28T05:12:02.833440880Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:02.836356 containerd[1611]: time="2025-10-28T05:12:02.836316172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:02.837340 containerd[1611]: time="2025-10-28T05:12:02.837303083Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.855105124s"
Oct 28 05:12:02.837382 containerd[1611]: time="2025-10-28T05:12:02.837355401Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Oct 28 05:12:02.838357 containerd[1611]: time="2025-10-28T05:12:02.838328155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Oct 28 05:12:04.313145 containerd[1611]: time="2025-10-28T05:12:04.313062679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:04.315737 containerd[1611]: time="2025-10-28T05:12:04.315682162Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 28 05:12:04.319198 containerd[1611]: time="2025-10-28T05:12:04.319136770Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:04.323053 containerd[1611]: time="2025-10-28T05:12:04.323000506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:04.324208 containerd[1611]: time="2025-10-28T05:12:04.324139813Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.485776862s"
Oct 28 05:12:04.324208 containerd[1611]: time="2025-10-28T05:12:04.324192522Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 28 05:12:04.324881 containerd[1611]: time="2025-10-28T05:12:04.324854673Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 28 05:12:05.833779 containerd[1611]: time="2025-10-28T05:12:05.833689798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:05.834448 containerd[1611]: time="2025-10-28T05:12:05.834418604Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Oct 28 05:12:05.835560 containerd[1611]: time="2025-10-28T05:12:05.835526392Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:05.838278 containerd[1611]: time="2025-10-28T05:12:05.838220324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:05.839140 containerd[1611]: time="2025-10-28T05:12:05.839103340Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.514217368s"
Oct 28 05:12:05.839205 containerd[1611]: time="2025-10-28T05:12:05.839141081Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 28 05:12:05.839710 containerd[1611]: time="2025-10-28T05:12:05.839685492Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 28 05:12:07.388211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735584699.mount: Deactivated successfully.
Oct 28 05:12:08.003358 containerd[1611]: time="2025-10-28T05:12:08.003293699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:08.003954 containerd[1611]: time="2025-10-28T05:12:08.003890679Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Oct 28 05:12:08.004947 containerd[1611]: time="2025-10-28T05:12:08.004917284Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:08.006820 containerd[1611]: time="2025-10-28T05:12:08.006773626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:08.007254 containerd[1611]: time="2025-10-28T05:12:08.007201368Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.167484167s"
Oct 28 05:12:08.007254 containerd[1611]: time="2025-10-28T05:12:08.007249478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 28 05:12:08.007803 containerd[1611]: time="2025-10-28T05:12:08.007765676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 28 05:12:09.067939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964919441.mount: Deactivated successfully.
Oct 28 05:12:09.401610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 28 05:12:09.406441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 05:12:09.714039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 05:12:09.726174 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 28 05:12:09.803468 kubelet[2198]: E1028 05:12:09.803351 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 28 05:12:09.807305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 28 05:12:09.807515 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 28 05:12:09.807930 systemd[1]: kubelet.service: Consumed 355ms CPU time, 111.3M memory peak.
Oct 28 05:12:10.318749 containerd[1611]: time="2025-10-28T05:12:10.318685375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:10.319364 containerd[1611]: time="2025-10-28T05:12:10.319335625Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Oct 28 05:12:10.320492 containerd[1611]: time="2025-10-28T05:12:10.320444675Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:10.323313 containerd[1611]: time="2025-10-28T05:12:10.323262209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:10.324231 containerd[1611]: time="2025-10-28T05:12:10.324191451Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.316381793s"
Oct 28 05:12:10.324231 containerd[1611]: time="2025-10-28T05:12:10.324225966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Oct 28 05:12:10.324823 containerd[1611]: time="2025-10-28T05:12:10.324777340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 28 05:12:10.935423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987565912.mount: Deactivated successfully.
Oct 28 05:12:10.941610 containerd[1611]: time="2025-10-28T05:12:10.941562462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 28 05:12:10.942344 containerd[1611]: time="2025-10-28T05:12:10.942303732Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 28 05:12:10.943588 containerd[1611]: time="2025-10-28T05:12:10.943528910Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 28 05:12:10.945496 containerd[1611]: time="2025-10-28T05:12:10.945451295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 28 05:12:10.946090 containerd[1611]: time="2025-10-28T05:12:10.946028287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 621.19391ms"
Oct 28 05:12:10.946090 containerd[1611]: time="2025-10-28T05:12:10.946072831Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 28 05:12:10.946576 containerd[1611]: time="2025-10-28T05:12:10.946537091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Oct 28 05:12:11.474031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2633436065.mount: Deactivated successfully.
Oct 28 05:12:13.548641 containerd[1611]: time="2025-10-28T05:12:13.548550856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:13.549377 containerd[1611]: time="2025-10-28T05:12:13.549316752Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Oct 28 05:12:13.550513 containerd[1611]: time="2025-10-28T05:12:13.550472570Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:13.553116 containerd[1611]: time="2025-10-28T05:12:13.553073858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 05:12:13.554263 containerd[1611]: time="2025-10-28T05:12:13.554222542Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.607644123s"
Oct 28 05:12:13.554320 containerd[1611]: time="2025-10-28T05:12:13.554263690Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Oct 28 05:12:17.096735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 05:12:17.096917 systemd[1]: kubelet.service: Consumed 355ms CPU time, 111.3M memory peak.
Oct 28 05:12:17.099054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 05:12:17.128058 systemd[1]: Reload requested from client PID 2326 ('systemctl') (unit session-9.scope)...
Oct 28 05:12:17.128077 systemd[1]: Reloading...
Oct 28 05:12:17.204003 zram_generator::config[2369]: No configuration found.
Oct 28 05:12:17.506489 systemd[1]: Reloading finished in 377 ms.
Oct 28 05:12:17.570446 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 28 05:12:17.570540 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 28 05:12:17.570855 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 05:12:17.570897 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.2M memory peak.
Oct 28 05:12:17.572448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 05:12:17.746905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 05:12:17.752606 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 28 05:12:17.889945 kubelet[2417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 28 05:12:17.889945 kubelet[2417]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 28 05:12:17.889945 kubelet[2417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 28 05:12:17.889945 kubelet[2417]: I1028 05:12:17.889918 2417 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 28 05:12:18.975832 kubelet[2417]: I1028 05:12:18.975741 2417 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 28 05:12:18.975832 kubelet[2417]: I1028 05:12:18.975782 2417 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 28 05:12:18.976290 kubelet[2417]: I1028 05:12:18.976073 2417 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 28 05:12:19.011624 kubelet[2417]: I1028 05:12:19.011472 2417 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 28 05:12:19.011624 kubelet[2417]: E1028 05:12:19.011606 2417 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 28 05:12:19.017622 kubelet[2417]: I1028 05:12:19.017582 2417 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 28 05:12:19.024061 kubelet[2417]: I1028 05:12:19.024023 2417 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 28 05:12:19.024363 kubelet[2417]: I1028 05:12:19.024334 2417 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 28 05:12:19.024599 kubelet[2417]: I1028 05:12:19.024357 2417 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 28 05:12:19.024599 kubelet[2417]: I1028 05:12:19.024596 2417 topology_manager.go:138] "Creating topology manager with none policy"
Oct 28 05:12:19.024866 kubelet[2417]: I1028 05:12:19.024610 2417 container_manager_linux.go:303] "Creating device plugin manager"
Oct 28 05:12:19.024866 kubelet[2417]: I1028 05:12:19.024781 2417 state_mem.go:36] "Initialized new in-memory state store"
Oct 28 05:12:19.026851 kubelet[2417]: I1028 05:12:19.026807 2417 kubelet.go:480] "Attempting to sync node with API server"
Oct 28 05:12:19.026851 kubelet[2417]: I1028 05:12:19.026847 2417 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 28 05:12:19.026936 kubelet[2417]: I1028 05:12:19.026890 2417 kubelet.go:386] "Adding apiserver pod source"
Oct 28 05:12:19.026936 kubelet[2417]: I1028 05:12:19.026916 2417 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 28 05:12:19.035862 kubelet[2417]: E1028 05:12:19.035206 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 28 05:12:19.035862 kubelet[2417]: I1028 05:12:19.035568 2417 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 28 05:12:19.036377 kubelet[2417]: I1028 05:12:19.036337 2417 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 28 05:12:19.037422 kubelet[2417]: E1028 05:12:19.037392 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 28 05:12:19.037686 kubelet[2417]: W1028 05:12:19.037651 2417 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 28 05:12:19.041577 kubelet[2417]: I1028 05:12:19.041526 2417 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 28 05:12:19.041659 kubelet[2417]: I1028 05:12:19.041599 2417 server.go:1289] "Started kubelet"
Oct 28 05:12:19.043808 kubelet[2417]: I1028 05:12:19.042400 2417 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 28 05:12:19.043808 kubelet[2417]: I1028 05:12:19.042864 2417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 28 05:12:19.043808 kubelet[2417]: I1028 05:12:19.043360 2417 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 28 05:12:19.043808 kubelet[2417]: I1028 05:12:19.043551 2417 server.go:317] "Adding debug handlers to kubelet server"
Oct 28 05:12:19.043808 kubelet[2417]: I1028 05:12:19.043708 2417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 28 05:12:19.045949 kubelet[2417]: I1028 05:12:19.045918 2417 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 28 05:12:19.048107 kubelet[2417]: E1028 05:12:19.047466 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 05:12:19.048107 kubelet[2417]: I1028 05:12:19.047532 2417 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 28 05:12:19.048107 kubelet[2417]: I1028 05:12:19.047771 2417 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 28 05:12:19.048107 kubelet[2417]: I1028 05:12:19.047857 2417 reconciler.go:26] "Reconciler: start to sync state"
Oct 28 05:12:19.049082 kubelet[2417]: E1028 05:12:19.048262 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 28 05:12:19.049082 kubelet[2417]: I1028 05:12:19.048440 2417 factory.go:223] Registration of the systemd container factory successfully
Oct 28 05:12:19.049082 kubelet[2417]: I1028 05:12:19.048510 2417 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 28 05:12:19.049279 kubelet[2417]: E1028 05:12:19.047956 2417 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18728fa40ed57300 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 05:12:19.041563392 +0000 UTC m=+1.284450150,LastTimestamp:2025-10-28 05:12:19.041563392 +0000 UTC m=+1.284450150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 28 05:12:19.049429 kubelet[2417]: E1028 05:12:19.049404 2417 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 28 05:12:19.049838 kubelet[2417]: I1028 05:12:19.049807 2417 factory.go:223] Registration of the containerd container factory successfully
Oct 28 05:12:19.049897 kubelet[2417]: E1028 05:12:19.049817 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms"
Oct 28 05:12:19.065954 kubelet[2417]: I1028 05:12:19.065900 2417 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 28 05:12:19.067581 kubelet[2417]: I1028 05:12:19.067549 2417 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 28 05:12:19.067622 kubelet[2417]: I1028 05:12:19.067585 2417 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 28 05:12:19.067622 kubelet[2417]: I1028 05:12:19.067613 2417 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 28 05:12:19.067679 kubelet[2417]: I1028 05:12:19.067628 2417 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 28 05:12:19.067715 kubelet[2417]: E1028 05:12:19.067676 2417 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 28 05:12:19.068257 kubelet[2417]: E1028 05:12:19.068194 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 28 05:12:19.069609 kubelet[2417]: I1028 05:12:19.069594 2417 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 28 05:12:19.069706 kubelet[2417]: I1028 05:12:19.069679 2417 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 28 05:12:19.069805 kubelet[2417]: I1028 05:12:19.069766 2417 state_mem.go:36] "Initialized new in-memory state store"
Oct 28 05:12:19.073258 kubelet[2417]: I1028 05:12:19.073231 2417 policy_none.go:49] "None policy: Start"
Oct 28 05:12:19.073330 kubelet[2417]: I1028 05:12:19.073260 2417 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 28 05:12:19.073330 kubelet[2417]: I1028 05:12:19.073284 2417 state_mem.go:35] "Initializing new in-memory state store"
Oct 28 05:12:19.079231 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 28 05:12:19.091900 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 28 05:12:19.094997 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 28 05:12:19.115079 kubelet[2417]: E1028 05:12:19.115044 2417 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 28 05:12:19.115429 kubelet[2417]: I1028 05:12:19.115414 2417 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 28 05:12:19.115533 kubelet[2417]: I1028 05:12:19.115485 2417 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 28 05:12:19.116114 kubelet[2417]: I1028 05:12:19.116077 2417 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 28 05:12:19.117197 kubelet[2417]: E1028 05:12:19.117172 2417 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 28 05:12:19.117325 kubelet[2417]: E1028 05:12:19.117233 2417 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 28 05:12:19.179223 systemd[1]: Created slice kubepods-burstable-pod8828e46a30833728672040be37f99ff1.slice - libcontainer container kubepods-burstable-pod8828e46a30833728672040be37f99ff1.slice.
Oct 28 05:12:19.210475 kubelet[2417]: E1028 05:12:19.210388 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 28 05:12:19.213706 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Oct 28 05:12:19.215542 kubelet[2417]: E1028 05:12:19.215516 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 28 05:12:19.216634 kubelet[2417]: I1028 05:12:19.216614 2417 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 28 05:12:19.217026 kubelet[2417]: E1028 05:12:19.217003 2417 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost"
Oct 28 05:12:19.218612 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Oct 28 05:12:19.220178 kubelet[2417]: E1028 05:12:19.220155 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 28 05:12:19.249455 kubelet[2417]: I1028 05:12:19.249367 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8828e46a30833728672040be37f99ff1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8828e46a30833728672040be37f99ff1\") " pod="kube-system/kube-apiserver-localhost"
Oct 28 05:12:19.249455 kubelet[2417]: I1028 05:12:19.249396 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8828e46a30833728672040be37f99ff1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8828e46a30833728672040be37f99ff1\") " pod="kube-system/kube-apiserver-localhost"
Oct 28 05:12:19.249455 kubelet[2417]: I1028 05:12:19.249416 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 05:12:19.249455 kubelet[2417]: I1028 05:12:19.249430 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 05:12:19.249455 kubelet[2417]: I1028 05:12:19.249446 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 05:12:19.249652 kubelet[2417]: I1028 05:12:19.249460 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 28 05:12:19.249652 kubelet[2417]: I1028 05:12:19.249473 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8828e46a30833728672040be37f99ff1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8828e46a30833728672040be37f99ff1\") " pod="kube-system/kube-apiserver-localhost"
Oct 28 05:12:19.249652 kubelet[2417]: I1028 05:12:19.249485 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 05:12:19.249652 kubelet[2417]: I1028 05:12:19.249498 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 05:12:19.250942 kubelet[2417]: E1028 05:12:19.250657 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms"
Oct 28 05:12:19.419300 kubelet[2417]: I1028 05:12:19.419255 2417 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 28 05:12:19.419655 kubelet[2417]: E1028 05:12:19.419621 2417 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost"
Oct 28 05:12:19.511455 kubelet[2417]: E1028 05:12:19.511339 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 05:12:19.512144 containerd[1611]: time="2025-10-28T05:12:19.512091336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8828e46a30833728672040be37f99ff1,Namespace:kube-system,Attempt:0,}"
Oct 28 05:12:19.516150 kubelet[2417]: E1028 05:12:19.516132 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 05:12:19.516531 containerd[1611]: time="2025-10-28T05:12:19.516426831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Oct 28 05:12:19.520800 kubelet[2417]: E1028 05:12:19.520752 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 05:12:19.521104 containerd[1611]: time="2025-10-28T05:12:19.521079099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Oct 28 05:12:19.566668 containerd[1611]: time="2025-10-28T05:12:19.566593208Z" level=info msg="connecting to shim 16085f98add37dc0a73a9596d34375ad770def55f1760c48f6988544d1e10d51" address="unix:///run/containerd/s/157c24ef0e9640253d27497c2b4beaa78bf052445a12294098d36a6999acaa01" namespace=k8s.io protocol=ttrpc version=3
Oct 28 05:12:19.575457 containerd[1611]: time="2025-10-28T05:12:19.575397548Z" level=info msg="connecting to shim 59d7a7091ae1470921e2860dd54f50bda04d0739eaadf9d8d8a7d08f868fb8fc" address="unix:///run/containerd/s/87de0abc50bfc6707b1aaa5f9702d9f030174599d74a0e8ce1223e287229eb9c" namespace=k8s.io protocol=ttrpc version=3
Oct 28 05:12:19.578033 containerd[1611]: time="2025-10-28T05:12:19.577962158Z" level=info msg="connecting to shim 1a5c5355e27ff7bf113b8db820417e31563495db9e6e14ccb97c16ded48a91ee" address="unix:///run/containerd/s/a4ff4fa36a74b404e1670c00d4af97d734b7a778be0b20381f622171123bd9e8" namespace=k8s.io protocol=ttrpc version=3
Oct 28 05:12:19.619951 systemd[1]: Started cri-containerd-1a5c5355e27ff7bf113b8db820417e31563495db9e6e14ccb97c16ded48a91ee.scope - libcontainer container 1a5c5355e27ff7bf113b8db820417e31563495db9e6e14ccb97c16ded48a91ee.
Oct 28 05:12:19.625528 systemd[1]: Started cri-containerd-16085f98add37dc0a73a9596d34375ad770def55f1760c48f6988544d1e10d51.scope - libcontainer container 16085f98add37dc0a73a9596d34375ad770def55f1760c48f6988544d1e10d51. Oct 28 05:12:19.628606 systemd[1]: Started cri-containerd-59d7a7091ae1470921e2860dd54f50bda04d0739eaadf9d8d8a7d08f868fb8fc.scope - libcontainer container 59d7a7091ae1470921e2860dd54f50bda04d0739eaadf9d8d8a7d08f868fb8fc. Oct 28 05:12:19.651804 kubelet[2417]: E1028 05:12:19.651741 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Oct 28 05:12:19.696667 containerd[1611]: time="2025-10-28T05:12:19.696608884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a5c5355e27ff7bf113b8db820417e31563495db9e6e14ccb97c16ded48a91ee\"" Oct 28 05:12:19.698612 kubelet[2417]: E1028 05:12:19.698592 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:19.705733 containerd[1611]: time="2025-10-28T05:12:19.705693194Z" level=info msg="CreateContainer within sandbox \"1a5c5355e27ff7bf113b8db820417e31563495db9e6e14ccb97c16ded48a91ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 28 05:12:19.707567 containerd[1611]: time="2025-10-28T05:12:19.707537090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"59d7a7091ae1470921e2860dd54f50bda04d0739eaadf9d8d8a7d08f868fb8fc\"" Oct 28 05:12:19.708089 kubelet[2417]: E1028 05:12:19.708067 2417 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:19.708895 containerd[1611]: time="2025-10-28T05:12:19.708871892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8828e46a30833728672040be37f99ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"16085f98add37dc0a73a9596d34375ad770def55f1760c48f6988544d1e10d51\"" Oct 28 05:12:19.709371 kubelet[2417]: E1028 05:12:19.709341 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:19.713468 containerd[1611]: time="2025-10-28T05:12:19.713355664Z" level=info msg="CreateContainer within sandbox \"59d7a7091ae1470921e2860dd54f50bda04d0739eaadf9d8d8a7d08f868fb8fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 28 05:12:19.716466 containerd[1611]: time="2025-10-28T05:12:19.716433216Z" level=info msg="CreateContainer within sandbox \"16085f98add37dc0a73a9596d34375ad770def55f1760c48f6988544d1e10d51\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 28 05:12:19.720814 containerd[1611]: time="2025-10-28T05:12:19.720774723Z" level=info msg="Container fd8dec5723c2f2f33ae57b8bee5d247fcec4ca11d496108429d1297e41817bf3: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:12:19.729731 containerd[1611]: time="2025-10-28T05:12:19.729668605Z" level=info msg="Container 2488b6c1e96ed6ff2130ee6705b2732f813bc8c496d1d1a1b1bad73a80d328ec: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:12:19.736820 containerd[1611]: time="2025-10-28T05:12:19.736771032Z" level=info msg="CreateContainer within sandbox \"1a5c5355e27ff7bf113b8db820417e31563495db9e6e14ccb97c16ded48a91ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"fd8dec5723c2f2f33ae57b8bee5d247fcec4ca11d496108429d1297e41817bf3\"" Oct 28 05:12:19.737775 containerd[1611]: time="2025-10-28T05:12:19.737755676Z" level=info msg="StartContainer for \"fd8dec5723c2f2f33ae57b8bee5d247fcec4ca11d496108429d1297e41817bf3\"" Oct 28 05:12:19.739043 containerd[1611]: time="2025-10-28T05:12:19.738208262Z" level=info msg="Container 0aec5d91703b6c054f71c729fb6cf3d87a8a537d3379bd7f8dd3037d767f784a: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:12:19.739358 containerd[1611]: time="2025-10-28T05:12:19.739336383Z" level=info msg="connecting to shim fd8dec5723c2f2f33ae57b8bee5d247fcec4ca11d496108429d1297e41817bf3" address="unix:///run/containerd/s/a4ff4fa36a74b404e1670c00d4af97d734b7a778be0b20381f622171123bd9e8" protocol=ttrpc version=3 Oct 28 05:12:19.751178 containerd[1611]: time="2025-10-28T05:12:19.751115345Z" level=info msg="CreateContainer within sandbox \"59d7a7091ae1470921e2860dd54f50bda04d0739eaadf9d8d8a7d08f868fb8fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2488b6c1e96ed6ff2130ee6705b2732f813bc8c496d1d1a1b1bad73a80d328ec\"" Oct 28 05:12:19.751519 containerd[1611]: time="2025-10-28T05:12:19.751462286Z" level=info msg="CreateContainer within sandbox \"16085f98add37dc0a73a9596d34375ad770def55f1760c48f6988544d1e10d51\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0aec5d91703b6c054f71c729fb6cf3d87a8a537d3379bd7f8dd3037d767f784a\"" Oct 28 05:12:19.752005 containerd[1611]: time="2025-10-28T05:12:19.751970560Z" level=info msg="StartContainer for \"2488b6c1e96ed6ff2130ee6705b2732f813bc8c496d1d1a1b1bad73a80d328ec\"" Oct 28 05:12:19.752229 containerd[1611]: time="2025-10-28T05:12:19.751970630Z" level=info msg="StartContainer for \"0aec5d91703b6c054f71c729fb6cf3d87a8a537d3379bd7f8dd3037d767f784a\"" Oct 28 05:12:19.753263 containerd[1611]: time="2025-10-28T05:12:19.753220236Z" level=info msg="connecting to shim 
2488b6c1e96ed6ff2130ee6705b2732f813bc8c496d1d1a1b1bad73a80d328ec" address="unix:///run/containerd/s/87de0abc50bfc6707b1aaa5f9702d9f030174599d74a0e8ce1223e287229eb9c" protocol=ttrpc version=3 Oct 28 05:12:19.755024 containerd[1611]: time="2025-10-28T05:12:19.754979038Z" level=info msg="connecting to shim 0aec5d91703b6c054f71c729fb6cf3d87a8a537d3379bd7f8dd3037d767f784a" address="unix:///run/containerd/s/157c24ef0e9640253d27497c2b4beaa78bf052445a12294098d36a6999acaa01" protocol=ttrpc version=3 Oct 28 05:12:19.810140 systemd[1]: Started cri-containerd-fd8dec5723c2f2f33ae57b8bee5d247fcec4ca11d496108429d1297e41817bf3.scope - libcontainer container fd8dec5723c2f2f33ae57b8bee5d247fcec4ca11d496108429d1297e41817bf3. Oct 28 05:12:19.821704 kubelet[2417]: I1028 05:12:19.821669 2417 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 05:12:19.821996 systemd[1]: Started cri-containerd-0aec5d91703b6c054f71c729fb6cf3d87a8a537d3379bd7f8dd3037d767f784a.scope - libcontainer container 0aec5d91703b6c054f71c729fb6cf3d87a8a537d3379bd7f8dd3037d767f784a. Oct 28 05:12:19.823440 kubelet[2417]: E1028 05:12:19.822901 2417 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Oct 28 05:12:19.825701 systemd[1]: Started cri-containerd-2488b6c1e96ed6ff2130ee6705b2732f813bc8c496d1d1a1b1bad73a80d328ec.scope - libcontainer container 2488b6c1e96ed6ff2130ee6705b2732f813bc8c496d1d1a1b1bad73a80d328ec. 
Oct 28 05:12:19.908162 containerd[1611]: time="2025-10-28T05:12:19.907934181Z" level=info msg="StartContainer for \"2488b6c1e96ed6ff2130ee6705b2732f813bc8c496d1d1a1b1bad73a80d328ec\" returns successfully" Oct 28 05:12:19.951441 containerd[1611]: time="2025-10-28T05:12:19.951399657Z" level=info msg="StartContainer for \"fd8dec5723c2f2f33ae57b8bee5d247fcec4ca11d496108429d1297e41817bf3\" returns successfully" Oct 28 05:12:19.960884 containerd[1611]: time="2025-10-28T05:12:19.960734763Z" level=info msg="StartContainer for \"0aec5d91703b6c054f71c729fb6cf3d87a8a537d3379bd7f8dd3037d767f784a\" returns successfully" Oct 28 05:12:20.076306 kubelet[2417]: E1028 05:12:20.075635 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 05:12:20.076306 kubelet[2417]: E1028 05:12:20.075767 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:20.076306 kubelet[2417]: E1028 05:12:20.075958 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 05:12:20.076306 kubelet[2417]: E1028 05:12:20.076038 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:20.080123 kubelet[2417]: E1028 05:12:20.080103 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 05:12:20.080521 kubelet[2417]: E1028 05:12:20.080478 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:20.627157 
kubelet[2417]: I1028 05:12:20.626644 2417 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 05:12:21.082543 kubelet[2417]: E1028 05:12:21.082491 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 05:12:21.082543 kubelet[2417]: E1028 05:12:21.082669 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:21.083550 kubelet[2417]: E1028 05:12:21.083517 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 05:12:21.083693 kubelet[2417]: E1028 05:12:21.083650 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:22.085545 kubelet[2417]: E1028 05:12:22.085306 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 05:12:22.085545 kubelet[2417]: E1028 05:12:22.085435 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:22.239354 kubelet[2417]: E1028 05:12:22.239267 2417 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 28 05:12:22.321537 kubelet[2417]: I1028 05:12:22.321473 2417 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 05:12:22.350087 kubelet[2417]: I1028 05:12:22.349967 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 
05:12:22.357210 kubelet[2417]: E1028 05:12:22.357163 2417 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 28 05:12:22.357210 kubelet[2417]: I1028 05:12:22.357192 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:22.358701 kubelet[2417]: E1028 05:12:22.358667 2417 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:22.358701 kubelet[2417]: I1028 05:12:22.358690 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 05:12:22.360110 kubelet[2417]: E1028 05:12:22.360086 2417 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 28 05:12:23.033898 kubelet[2417]: I1028 05:12:23.033837 2417 apiserver.go:52] "Watching apiserver" Oct 28 05:12:23.048196 kubelet[2417]: I1028 05:12:23.048155 2417 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 28 05:12:25.723446 systemd[1]: Reload requested from client PID 2704 ('systemctl') (unit session-9.scope)... Oct 28 05:12:25.723462 systemd[1]: Reloading... Oct 28 05:12:25.799911 zram_generator::config[2749]: No configuration found. Oct 28 05:12:26.029110 systemd[1]: Reloading finished in 305 ms. Oct 28 05:12:26.063520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 05:12:26.082500 systemd[1]: kubelet.service: Deactivated successfully. Oct 28 05:12:26.082772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 28 05:12:26.082837 systemd[1]: kubelet.service: Consumed 1.494s CPU time, 131.8M memory peak. Oct 28 05:12:26.085404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 05:12:26.312584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 05:12:26.325303 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 05:12:26.375919 kubelet[2793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 05:12:26.375919 kubelet[2793]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 05:12:26.375919 kubelet[2793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 28 05:12:26.376517 kubelet[2793]: I1028 05:12:26.375989 2793 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 05:12:26.384392 kubelet[2793]: I1028 05:12:26.384340 2793 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 28 05:12:26.384392 kubelet[2793]: I1028 05:12:26.384375 2793 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 05:12:26.384653 kubelet[2793]: I1028 05:12:26.384625 2793 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 05:12:26.386251 kubelet[2793]: I1028 05:12:26.386222 2793 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 28 05:12:26.389112 kubelet[2793]: I1028 05:12:26.389007 2793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 05:12:26.396374 kubelet[2793]: I1028 05:12:26.396341 2793 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 05:12:26.403885 kubelet[2793]: I1028 05:12:26.403858 2793 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 28 05:12:26.404206 kubelet[2793]: I1028 05:12:26.404155 2793 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 05:12:26.404382 kubelet[2793]: I1028 05:12:26.404188 2793 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 05:12:26.404509 kubelet[2793]: I1028 05:12:26.404392 2793 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 05:12:26.404509 
kubelet[2793]: I1028 05:12:26.404412 2793 container_manager_linux.go:303] "Creating device plugin manager" Oct 28 05:12:26.405319 kubelet[2793]: I1028 05:12:26.405283 2793 state_mem.go:36] "Initialized new in-memory state store" Oct 28 05:12:26.405508 kubelet[2793]: I1028 05:12:26.405494 2793 kubelet.go:480] "Attempting to sync node with API server" Oct 28 05:12:26.405508 kubelet[2793]: I1028 05:12:26.405509 2793 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 05:12:26.405603 kubelet[2793]: I1028 05:12:26.405535 2793 kubelet.go:386] "Adding apiserver pod source" Oct 28 05:12:26.407591 kubelet[2793]: I1028 05:12:26.407294 2793 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 05:12:26.411319 kubelet[2793]: I1028 05:12:26.411296 2793 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 28 05:12:26.412121 kubelet[2793]: I1028 05:12:26.412085 2793 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 05:12:26.418687 kubelet[2793]: I1028 05:12:26.418181 2793 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 28 05:12:26.418687 kubelet[2793]: I1028 05:12:26.418238 2793 server.go:1289] "Started kubelet" Oct 28 05:12:26.419671 kubelet[2793]: I1028 05:12:26.419636 2793 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 05:12:26.420873 kubelet[2793]: I1028 05:12:26.420822 2793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 05:12:26.425244 kubelet[2793]: I1028 05:12:26.425208 2793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 05:12:26.426216 kubelet[2793]: I1028 05:12:26.426176 2793 server.go:317] "Adding debug handlers to kubelet server" Oct 28 05:12:26.428533 kubelet[2793]: I1028 05:12:26.428478 2793 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 05:12:26.431816 kubelet[2793]: I1028 05:12:26.430238 2793 factory.go:223] Registration of the systemd container factory successfully Oct 28 05:12:26.431816 kubelet[2793]: I1028 05:12:26.430363 2793 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 05:12:26.431816 kubelet[2793]: I1028 05:12:26.431127 2793 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 05:12:26.431816 kubelet[2793]: I1028 05:12:26.431297 2793 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 28 05:12:26.431816 kubelet[2793]: I1028 05:12:26.431492 2793 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 28 05:12:26.431816 kubelet[2793]: I1028 05:12:26.431691 2793 reconciler.go:26] "Reconciler: start to sync state" Oct 28 05:12:26.434630 kubelet[2793]: E1028 05:12:26.433885 2793 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 05:12:26.436825 kubelet[2793]: I1028 05:12:26.435185 2793 factory.go:223] Registration of the containerd container factory successfully Oct 28 05:12:26.458727 kubelet[2793]: I1028 05:12:26.458474 2793 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 28 05:12:26.462151 kubelet[2793]: I1028 05:12:26.462045 2793 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 28 05:12:26.462151 kubelet[2793]: I1028 05:12:26.462080 2793 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 28 05:12:26.462151 kubelet[2793]: I1028 05:12:26.462109 2793 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 28 05:12:26.462151 kubelet[2793]: I1028 05:12:26.462118 2793 kubelet.go:2436] "Starting kubelet main sync loop" Oct 28 05:12:26.462404 kubelet[2793]: E1028 05:12:26.462185 2793 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 05:12:26.489209 kubelet[2793]: I1028 05:12:26.489162 2793 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 05:12:26.489209 kubelet[2793]: I1028 05:12:26.489192 2793 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 05:12:26.489209 kubelet[2793]: I1028 05:12:26.489213 2793 state_mem.go:36] "Initialized new in-memory state store" Oct 28 05:12:26.489408 kubelet[2793]: I1028 05:12:26.489385 2793 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 28 05:12:26.489466 kubelet[2793]: I1028 05:12:26.489408 2793 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 28 05:12:26.489466 kubelet[2793]: I1028 05:12:26.489431 2793 policy_none.go:49] "None policy: Start" Oct 28 05:12:26.489466 kubelet[2793]: I1028 05:12:26.489443 2793 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 28 05:12:26.489466 kubelet[2793]: I1028 05:12:26.489455 2793 state_mem.go:35] "Initializing new in-memory state store" Oct 28 05:12:26.489588 kubelet[2793]: I1028 05:12:26.489568 2793 state_mem.go:75] "Updated machine memory state" Oct 28 05:12:26.494325 kubelet[2793]: E1028 05:12:26.494298 2793 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 05:12:26.494576 kubelet[2793]: I1028 
05:12:26.494546 2793 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 05:12:26.494632 kubelet[2793]: I1028 05:12:26.494565 2793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 05:12:26.494895 kubelet[2793]: I1028 05:12:26.494875 2793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 05:12:26.496775 kubelet[2793]: E1028 05:12:26.496745 2793 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 05:12:26.564176 kubelet[2793]: I1028 05:12:26.564018 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 05:12:26.564176 kubelet[2793]: I1028 05:12:26.564151 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 05:12:26.565185 kubelet[2793]: I1028 05:12:26.564335 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:26.604677 kubelet[2793]: I1028 05:12:26.604606 2793 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 05:12:26.611414 kubelet[2793]: I1028 05:12:26.611369 2793 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 28 05:12:26.611579 kubelet[2793]: I1028 05:12:26.611489 2793 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 05:12:26.633111 kubelet[2793]: I1028 05:12:26.633064 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:26.633111 kubelet[2793]: I1028 05:12:26.633105 2793 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:26.633327 kubelet[2793]: I1028 05:12:26.633131 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 28 05:12:26.633327 kubelet[2793]: I1028 05:12:26.633174 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8828e46a30833728672040be37f99ff1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8828e46a30833728672040be37f99ff1\") " pod="kube-system/kube-apiserver-localhost" Oct 28 05:12:26.633327 kubelet[2793]: I1028 05:12:26.633203 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8828e46a30833728672040be37f99ff1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8828e46a30833728672040be37f99ff1\") " pod="kube-system/kube-apiserver-localhost" Oct 28 05:12:26.633327 kubelet[2793]: I1028 05:12:26.633221 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8828e46a30833728672040be37f99ff1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8828e46a30833728672040be37f99ff1\") " pod="kube-system/kube-apiserver-localhost" Oct 28 05:12:26.633327 kubelet[2793]: I1028 05:12:26.633237 2793 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:26.633482 kubelet[2793]: I1028 05:12:26.633254 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:26.633482 kubelet[2793]: I1028 05:12:26.633276 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:26.870765 kubelet[2793]: E1028 05:12:26.870601 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:26.871741 kubelet[2793]: E1028 05:12:26.871629 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:26.871839 kubelet[2793]: E1028 05:12:26.871777 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:27.407907 kubelet[2793]: I1028 05:12:27.407838 2793 apiserver.go:52] "Watching apiserver" Oct 28 05:12:27.432365 kubelet[2793]: 
I1028 05:12:27.432318 2793 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 28 05:12:27.479114 kubelet[2793]: I1028 05:12:27.479084 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:27.479289 kubelet[2793]: I1028 05:12:27.479206 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 05:12:27.479416 kubelet[2793]: I1028 05:12:27.479367 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 05:12:27.486258 kubelet[2793]: E1028 05:12:27.485766 2793 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 28 05:12:27.486258 kubelet[2793]: E1028 05:12:27.485937 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:27.486411 kubelet[2793]: E1028 05:12:27.486237 2793 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 28 05:12:27.486411 kubelet[2793]: E1028 05:12:27.486406 2793 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 28 05:12:27.486534 kubelet[2793]: E1028 05:12:27.486499 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:27.486534 kubelet[2793]: E1028 05:12:27.486525 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 
05:12:27.498065 kubelet[2793]: I1028 05:12:27.497999 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.497976887 podStartE2EDuration="1.497976887s" podCreationTimestamp="2025-10-28 05:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:12:27.497289905 +0000 UTC m=+1.166815986" watchObservedRunningTime="2025-10-28 05:12:27.497976887 +0000 UTC m=+1.167502958" Oct 28 05:12:27.509589 kubelet[2793]: I1028 05:12:27.509520 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.509498088 podStartE2EDuration="1.509498088s" podCreationTimestamp="2025-10-28 05:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:12:27.502879253 +0000 UTC m=+1.172405324" watchObservedRunningTime="2025-10-28 05:12:27.509498088 +0000 UTC m=+1.179024159" Oct 28 05:12:27.516377 kubelet[2793]: I1028 05:12:27.516248 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.516233956 podStartE2EDuration="1.516233956s" podCreationTimestamp="2025-10-28 05:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:12:27.509639658 +0000 UTC m=+1.179165729" watchObservedRunningTime="2025-10-28 05:12:27.516233956 +0000 UTC m=+1.185760027" Oct 28 05:12:28.481246 kubelet[2793]: E1028 05:12:28.481204 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:28.481246 kubelet[2793]: E1028 05:12:28.481213 2793 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:28.481874 kubelet[2793]: E1028 05:12:28.481507 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:29.647346 kubelet[2793]: E1028 05:12:29.647259 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:30.316989 kubelet[2793]: I1028 05:12:30.316931 2793 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 28 05:12:30.317406 containerd[1611]: time="2025-10-28T05:12:30.317365615Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 28 05:12:30.317743 kubelet[2793]: I1028 05:12:30.317639 2793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 28 05:12:30.484423 kubelet[2793]: E1028 05:12:30.484382 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:30.713962 update_engine[1585]: I20251028 05:12:30.713855 1585 update_attempter.cc:509] Updating boot flags... Oct 28 05:12:31.320563 systemd[1]: Created slice kubepods-besteffort-pod6de824be_f453_44c9_8d0d_6a6c73e09571.slice - libcontainer container kubepods-besteffort-pod6de824be_f453_44c9_8d0d_6a6c73e09571.slice. 
Oct 28 05:12:31.359531 kubelet[2793]: I1028 05:12:31.359440 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6de824be-f453-44c9-8d0d-6a6c73e09571-xtables-lock\") pod \"kube-proxy-s6xtg\" (UID: \"6de824be-f453-44c9-8d0d-6a6c73e09571\") " pod="kube-system/kube-proxy-s6xtg" Oct 28 05:12:31.359531 kubelet[2793]: I1028 05:12:31.359505 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6de824be-f453-44c9-8d0d-6a6c73e09571-lib-modules\") pod \"kube-proxy-s6xtg\" (UID: \"6de824be-f453-44c9-8d0d-6a6c73e09571\") " pod="kube-system/kube-proxy-s6xtg" Oct 28 05:12:31.359531 kubelet[2793]: I1028 05:12:31.359533 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwrbr\" (UniqueName: \"kubernetes.io/projected/6de824be-f453-44c9-8d0d-6a6c73e09571-kube-api-access-xwrbr\") pod \"kube-proxy-s6xtg\" (UID: \"6de824be-f453-44c9-8d0d-6a6c73e09571\") " pod="kube-system/kube-proxy-s6xtg" Oct 28 05:12:31.360195 kubelet[2793]: I1028 05:12:31.359558 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6de824be-f453-44c9-8d0d-6a6c73e09571-kube-proxy\") pod \"kube-proxy-s6xtg\" (UID: \"6de824be-f453-44c9-8d0d-6a6c73e09571\") " pod="kube-system/kube-proxy-s6xtg" Oct 28 05:12:31.422345 systemd[1]: Created slice kubepods-besteffort-pod368893ab_ac35_4f35_a353_587f936c114e.slice - libcontainer container kubepods-besteffort-pod368893ab_ac35_4f35_a353_587f936c114e.slice. 
Oct 28 05:12:31.461066 kubelet[2793]: I1028 05:12:31.460900 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/368893ab-ac35-4f35-a353-587f936c114e-var-lib-calico\") pod \"tigera-operator-7dcd859c48-22brb\" (UID: \"368893ab-ac35-4f35-a353-587f936c114e\") " pod="tigera-operator/tigera-operator-7dcd859c48-22brb" Oct 28 05:12:31.461287 kubelet[2793]: I1028 05:12:31.461100 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkg8v\" (UniqueName: \"kubernetes.io/projected/368893ab-ac35-4f35-a353-587f936c114e-kube-api-access-nkg8v\") pod \"tigera-operator-7dcd859c48-22brb\" (UID: \"368893ab-ac35-4f35-a353-587f936c114e\") " pod="tigera-operator/tigera-operator-7dcd859c48-22brb" Oct 28 05:12:31.634513 kubelet[2793]: E1028 05:12:31.634353 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:31.635374 containerd[1611]: time="2025-10-28T05:12:31.635300291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6xtg,Uid:6de824be-f453-44c9-8d0d-6a6c73e09571,Namespace:kube-system,Attempt:0,}" Oct 28 05:12:31.682581 containerd[1611]: time="2025-10-28T05:12:31.682516357Z" level=info msg="connecting to shim cef240cd34507afbcefe6e8d5d236a3b3bb31e3cb2c2c0734a07f54739ac090b" address="unix:///run/containerd/s/14f1a0e6d8a0d99d9d056acb544c39e10f2b4db81469933134e789e9ac2a29d9" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:12:31.726916 containerd[1611]: time="2025-10-28T05:12:31.726853986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-22brb,Uid:368893ab-ac35-4f35-a353-587f936c114e,Namespace:tigera-operator,Attempt:0,}" Oct 28 05:12:31.749015 containerd[1611]: time="2025-10-28T05:12:31.748929644Z" level=info msg="connecting 
to shim 522646b20d742be61240756adde9a16b64a4c81233d51902c9355a9c5e145f2a" address="unix:///run/containerd/s/2ac1862758e16019b261de4987a31800c0585050d867741742a9f210f850b62c" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:12:31.756056 systemd[1]: Started cri-containerd-cef240cd34507afbcefe6e8d5d236a3b3bb31e3cb2c2c0734a07f54739ac090b.scope - libcontainer container cef240cd34507afbcefe6e8d5d236a3b3bb31e3cb2c2c0734a07f54739ac090b. Oct 28 05:12:31.774517 kubelet[2793]: E1028 05:12:31.774463 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:31.780112 systemd[1]: Started cri-containerd-522646b20d742be61240756adde9a16b64a4c81233d51902c9355a9c5e145f2a.scope - libcontainer container 522646b20d742be61240756adde9a16b64a4c81233d51902c9355a9c5e145f2a. Oct 28 05:12:31.797875 containerd[1611]: time="2025-10-28T05:12:31.797827519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6xtg,Uid:6de824be-f453-44c9-8d0d-6a6c73e09571,Namespace:kube-system,Attempt:0,} returns sandbox id \"cef240cd34507afbcefe6e8d5d236a3b3bb31e3cb2c2c0734a07f54739ac090b\"" Oct 28 05:12:31.798805 kubelet[2793]: E1028 05:12:31.798770 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:31.805130 containerd[1611]: time="2025-10-28T05:12:31.805086569Z" level=info msg="CreateContainer within sandbox \"cef240cd34507afbcefe6e8d5d236a3b3bb31e3cb2c2c0734a07f54739ac090b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 28 05:12:31.817426 containerd[1611]: time="2025-10-28T05:12:31.817380438Z" level=info msg="Container 239f74a6d87273629d5e861b71b06fdfadecd67b2a85d510b321878a6ce09ae2: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:12:31.828137 containerd[1611]: time="2025-10-28T05:12:31.828094460Z" 
level=info msg="CreateContainer within sandbox \"cef240cd34507afbcefe6e8d5d236a3b3bb31e3cb2c2c0734a07f54739ac090b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"239f74a6d87273629d5e861b71b06fdfadecd67b2a85d510b321878a6ce09ae2\"" Oct 28 05:12:31.828934 containerd[1611]: time="2025-10-28T05:12:31.828902187Z" level=info msg="StartContainer for \"239f74a6d87273629d5e861b71b06fdfadecd67b2a85d510b321878a6ce09ae2\"" Oct 28 05:12:31.831224 containerd[1611]: time="2025-10-28T05:12:31.831187424Z" level=info msg="connecting to shim 239f74a6d87273629d5e861b71b06fdfadecd67b2a85d510b321878a6ce09ae2" address="unix:///run/containerd/s/14f1a0e6d8a0d99d9d056acb544c39e10f2b4db81469933134e789e9ac2a29d9" protocol=ttrpc version=3 Oct 28 05:12:31.849114 containerd[1611]: time="2025-10-28T05:12:31.848969905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-22brb,Uid:368893ab-ac35-4f35-a353-587f936c114e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"522646b20d742be61240756adde9a16b64a4c81233d51902c9355a9c5e145f2a\"" Oct 28 05:12:31.852414 containerd[1611]: time="2025-10-28T05:12:31.852066196Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 28 05:12:31.871031 systemd[1]: Started cri-containerd-239f74a6d87273629d5e861b71b06fdfadecd67b2a85d510b321878a6ce09ae2.scope - libcontainer container 239f74a6d87273629d5e861b71b06fdfadecd67b2a85d510b321878a6ce09ae2. 
Oct 28 05:12:31.916563 containerd[1611]: time="2025-10-28T05:12:31.916511288Z" level=info msg="StartContainer for \"239f74a6d87273629d5e861b71b06fdfadecd67b2a85d510b321878a6ce09ae2\" returns successfully" Oct 28 05:12:32.492237 kubelet[2793]: E1028 05:12:32.492144 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:32.492680 kubelet[2793]: E1028 05:12:32.492257 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:32.511643 kubelet[2793]: I1028 05:12:32.511474 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s6xtg" podStartSLOduration=1.51145553 podStartE2EDuration="1.51145553s" podCreationTimestamp="2025-10-28 05:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:12:32.510645832 +0000 UTC m=+6.180171903" watchObservedRunningTime="2025-10-28 05:12:32.51145553 +0000 UTC m=+6.180981601" Oct 28 05:12:33.499528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1789351141.mount: Deactivated successfully. 
Oct 28 05:12:33.988656 containerd[1611]: time="2025-10-28T05:12:33.988596937Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:33.989582 containerd[1611]: time="2025-10-28T05:12:33.989553113Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 28 05:12:33.990836 containerd[1611]: time="2025-10-28T05:12:33.990810882Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:33.993228 containerd[1611]: time="2025-10-28T05:12:33.993204497Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:33.993968 containerd[1611]: time="2025-10-28T05:12:33.993903565Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.141798315s" Oct 28 05:12:33.994005 containerd[1611]: time="2025-10-28T05:12:33.993970512Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 28 05:12:33.999680 containerd[1611]: time="2025-10-28T05:12:33.999623638Z" level=info msg="CreateContainer within sandbox \"522646b20d742be61240756adde9a16b64a4c81233d51902c9355a9c5e145f2a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 28 05:12:34.009306 containerd[1611]: time="2025-10-28T05:12:34.009243135Z" level=info msg="Container 
7ef8326694fdc1721c2c11c556892204be96d16b3cbecaa9f825a3d5124a51af: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:12:34.018744 containerd[1611]: time="2025-10-28T05:12:34.018680424Z" level=info msg="CreateContainer within sandbox \"522646b20d742be61240756adde9a16b64a4c81233d51902c9355a9c5e145f2a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7ef8326694fdc1721c2c11c556892204be96d16b3cbecaa9f825a3d5124a51af\"" Oct 28 05:12:34.019256 containerd[1611]: time="2025-10-28T05:12:34.019225558Z" level=info msg="StartContainer for \"7ef8326694fdc1721c2c11c556892204be96d16b3cbecaa9f825a3d5124a51af\"" Oct 28 05:12:34.020331 containerd[1611]: time="2025-10-28T05:12:34.020233250Z" level=info msg="connecting to shim 7ef8326694fdc1721c2c11c556892204be96d16b3cbecaa9f825a3d5124a51af" address="unix:///run/containerd/s/2ac1862758e16019b261de4987a31800c0585050d867741742a9f210f850b62c" protocol=ttrpc version=3 Oct 28 05:12:34.049036 systemd[1]: Started cri-containerd-7ef8326694fdc1721c2c11c556892204be96d16b3cbecaa9f825a3d5124a51af.scope - libcontainer container 7ef8326694fdc1721c2c11c556892204be96d16b3cbecaa9f825a3d5124a51af. 
Oct 28 05:12:34.087731 containerd[1611]: time="2025-10-28T05:12:34.087688639Z" level=info msg="StartContainer for \"7ef8326694fdc1721c2c11c556892204be96d16b3cbecaa9f825a3d5124a51af\" returns successfully" Oct 28 05:12:35.485049 kubelet[2793]: E1028 05:12:35.484942 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:35.500652 kubelet[2793]: E1028 05:12:35.500601 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:35.507557 kubelet[2793]: I1028 05:12:35.507345 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-22brb" podStartSLOduration=2.363600502 podStartE2EDuration="4.507319909s" podCreationTimestamp="2025-10-28 05:12:31 +0000 UTC" firstStartedPulling="2025-10-28 05:12:31.851161574 +0000 UTC m=+5.520687645" lastFinishedPulling="2025-10-28 05:12:33.994880991 +0000 UTC m=+7.664407052" observedRunningTime="2025-10-28 05:12:34.505909536 +0000 UTC m=+8.175435607" watchObservedRunningTime="2025-10-28 05:12:35.507319909 +0000 UTC m=+9.176845980" Oct 28 05:12:39.797873 sudo[1835]: pam_unix(sudo:session): session closed for user root Oct 28 05:12:39.799898 sshd[1834]: Connection closed by 10.0.0.1 port 35102 Oct 28 05:12:39.802362 sshd-session[1831]: pam_unix(sshd:session): session closed for user core Oct 28 05:12:39.808188 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:35102.service: Deactivated successfully. Oct 28 05:12:39.810400 systemd[1]: session-9.scope: Deactivated successfully. Oct 28 05:12:39.810629 systemd[1]: session-9.scope: Consumed 6.106s CPU time, 215.6M memory peak. Oct 28 05:12:39.812959 systemd-logind[1584]: Session 9 logged out. Waiting for processes to exit. 
Oct 28 05:12:39.816118 systemd-logind[1584]: Removed session 9. Oct 28 05:12:44.172701 systemd[1]: Created slice kubepods-besteffort-pod41ddec48_d50a_433d_99f5_e3a2701f8ff1.slice - libcontainer container kubepods-besteffort-pod41ddec48_d50a_433d_99f5_e3a2701f8ff1.slice. Oct 28 05:12:44.246642 kubelet[2793]: I1028 05:12:44.246584 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41ddec48-d50a-433d-99f5-e3a2701f8ff1-tigera-ca-bundle\") pod \"calico-typha-86c849df8c-dvf5t\" (UID: \"41ddec48-d50a-433d-99f5-e3a2701f8ff1\") " pod="calico-system/calico-typha-86c849df8c-dvf5t" Oct 28 05:12:44.246642 kubelet[2793]: I1028 05:12:44.246637 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41ddec48-d50a-433d-99f5-e3a2701f8ff1-typha-certs\") pod \"calico-typha-86c849df8c-dvf5t\" (UID: \"41ddec48-d50a-433d-99f5-e3a2701f8ff1\") " pod="calico-system/calico-typha-86c849df8c-dvf5t" Oct 28 05:12:44.246642 kubelet[2793]: I1028 05:12:44.246665 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9bbs\" (UniqueName: \"kubernetes.io/projected/41ddec48-d50a-433d-99f5-e3a2701f8ff1-kube-api-access-g9bbs\") pod \"calico-typha-86c849df8c-dvf5t\" (UID: \"41ddec48-d50a-433d-99f5-e3a2701f8ff1\") " pod="calico-system/calico-typha-86c849df8c-dvf5t" Oct 28 05:12:44.281472 systemd[1]: Created slice kubepods-besteffort-podcbfc1adb_2a68_49b7_bf04_bf226a5de5ef.slice - libcontainer container kubepods-besteffort-podcbfc1adb_2a68_49b7_bf04_bf226a5de5ef.slice. 
Oct 28 05:12:44.347913 kubelet[2793]: I1028 05:12:44.347718 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-lib-modules\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.347913 kubelet[2793]: I1028 05:12:44.347782 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cwln\" (UniqueName: \"kubernetes.io/projected/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-kube-api-access-5cwln\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.347913 kubelet[2793]: I1028 05:12:44.347824 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-node-certs\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.347913 kubelet[2793]: I1028 05:12:44.347837 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-policysync\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.347913 kubelet[2793]: I1028 05:12:44.347856 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-cni-log-dir\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.348249 kubelet[2793]: I1028 05:12:44.347869 2793 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-flexvol-driver-host\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.348249 kubelet[2793]: I1028 05:12:44.348015 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-xtables-lock\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.348249 kubelet[2793]: I1028 05:12:44.348033 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-tigera-ca-bundle\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.348249 kubelet[2793]: I1028 05:12:44.348046 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-var-run-calico\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.348249 kubelet[2793]: I1028 05:12:44.348082 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-cni-bin-dir\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.348401 kubelet[2793]: I1028 05:12:44.348098 2793 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-cni-net-dir\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.348401 kubelet[2793]: I1028 05:12:44.348114 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cbfc1adb-2a68-49b7-bf04-bf226a5de5ef-var-lib-calico\") pod \"calico-node-c4zlz\" (UID: \"cbfc1adb-2a68-49b7-bf04-bf226a5de5ef\") " pod="calico-system/calico-node-c4zlz" Oct 28 05:12:44.456193 kubelet[2793]: E1028 05:12:44.456063 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.456193 kubelet[2793]: W1028 05:12:44.456093 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.461514 kubelet[2793]: E1028 05:12:44.461463 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.462182 kubelet[2793]: E1028 05:12:44.462153 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.462340 kubelet[2793]: W1028 05:12:44.462284 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.462340 kubelet[2793]: E1028 05:12:44.462310 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.466907 kubelet[2793]: E1028 05:12:44.466847 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.466907 kubelet[2793]: W1028 05:12:44.466881 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.466907 kubelet[2793]: E1028 05:12:44.466906 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.477813 kubelet[2793]: E1028 05:12:44.477520 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:44.479610 containerd[1611]: time="2025-10-28T05:12:44.479553139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86c849df8c-dvf5t,Uid:41ddec48-d50a-433d-99f5-e3a2701f8ff1,Namespace:calico-system,Attempt:0,}" Oct 28 05:12:44.480846 kubelet[2793]: E1028 05:12:44.480755 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 05:12:44.504778 containerd[1611]: time="2025-10-28T05:12:44.504703089Z" level=info msg="connecting to shim 46b7f75e136164309e7e715c219bdc2e1bd7b5e83c7ed09dfcf85a1fec395310" address="unix:///run/containerd/s/fdb4254e7b0142e668655a7cb30ed9b735a8ce6a78eb4621c70296b3279a4a01" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:12:44.524700 kubelet[2793]: E1028 05:12:44.524648 2793 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.524700 kubelet[2793]: W1028 05:12:44.524669 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.524700 kubelet[2793]: E1028 05:12:44.524692 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.525099 kubelet[2793]: E1028 05:12:44.524943 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.525099 kubelet[2793]: W1028 05:12:44.524951 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.525099 kubelet[2793]: E1028 05:12:44.524959 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.525190 kubelet[2793]: E1028 05:12:44.525117 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.525190 kubelet[2793]: W1028 05:12:44.525123 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.525190 kubelet[2793]: E1028 05:12:44.525130 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.526982 kubelet[2793]: E1028 05:12:44.526852 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.526982 kubelet[2793]: W1028 05:12:44.526868 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.526982 kubelet[2793]: E1028 05:12:44.526878 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.527344 kubelet[2793]: E1028 05:12:44.527277 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.527380 kubelet[2793]: W1028 05:12:44.527336 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.527424 kubelet[2793]: E1028 05:12:44.527372 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.527888 kubelet[2793]: E1028 05:12:44.527862 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.527888 kubelet[2793]: W1028 05:12:44.527880 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.527941 kubelet[2793]: E1028 05:12:44.527893 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.528266 kubelet[2793]: E1028 05:12:44.528200 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.528266 kubelet[2793]: W1028 05:12:44.528218 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.528266 kubelet[2793]: E1028 05:12:44.528232 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.528722 kubelet[2793]: E1028 05:12:44.528687 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.528722 kubelet[2793]: W1028 05:12:44.528700 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.528857 kubelet[2793]: E1028 05:12:44.528843 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.529227 kubelet[2793]: E1028 05:12:44.529165 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.529227 kubelet[2793]: W1028 05:12:44.529177 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.529227 kubelet[2793]: E1028 05:12:44.529188 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.529587 kubelet[2793]: E1028 05:12:44.529526 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.529587 kubelet[2793]: W1028 05:12:44.529538 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.529587 kubelet[2793]: E1028 05:12:44.529549 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.529917 kubelet[2793]: E1028 05:12:44.529905 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.530094 kubelet[2793]: W1028 05:12:44.529981 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.530094 kubelet[2793]: E1028 05:12:44.529996 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.530280 kubelet[2793]: E1028 05:12:44.530267 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.530352 kubelet[2793]: W1028 05:12:44.530338 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.530421 kubelet[2793]: E1028 05:12:44.530407 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.530805 kubelet[2793]: E1028 05:12:44.530719 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.530805 kubelet[2793]: W1028 05:12:44.530733 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.530805 kubelet[2793]: E1028 05:12:44.530746 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.531151 kubelet[2793]: E1028 05:12:44.531136 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.531231 kubelet[2793]: W1028 05:12:44.531217 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.531303 kubelet[2793]: E1028 05:12:44.531289 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.531663 kubelet[2793]: E1028 05:12:44.531588 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.531663 kubelet[2793]: W1028 05:12:44.531602 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.531663 kubelet[2793]: E1028 05:12:44.531614 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.532044 kubelet[2793]: E1028 05:12:44.532030 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.532119 kubelet[2793]: W1028 05:12:44.532104 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.532201 kubelet[2793]: E1028 05:12:44.532186 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.532514 kubelet[2793]: E1028 05:12:44.532499 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.532643 kubelet[2793]: W1028 05:12:44.532580 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.532643 kubelet[2793]: E1028 05:12:44.532598 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.533020 kubelet[2793]: E1028 05:12:44.532948 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.533020 kubelet[2793]: W1028 05:12:44.532962 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.533020 kubelet[2793]: E1028 05:12:44.532974 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.533339 kubelet[2793]: E1028 05:12:44.533321 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.533943 kubelet[2793]: W1028 05:12:44.533744 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.533943 kubelet[2793]: E1028 05:12:44.533761 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.534179 kubelet[2793]: E1028 05:12:44.534164 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.534321 kubelet[2793]: W1028 05:12:44.534276 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.534471 kubelet[2793]: E1028 05:12:44.534372 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.537993 systemd[1]: Started cri-containerd-46b7f75e136164309e7e715c219bdc2e1bd7b5e83c7ed09dfcf85a1fec395310.scope - libcontainer container 46b7f75e136164309e7e715c219bdc2e1bd7b5e83c7ed09dfcf85a1fec395310. Oct 28 05:12:44.551326 kubelet[2793]: E1028 05:12:44.551257 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.552118 kubelet[2793]: W1028 05:12:44.551413 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.552118 kubelet[2793]: E1028 05:12:44.551439 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.552118 kubelet[2793]: I1028 05:12:44.551600 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d86ce3dc-83d4-402a-b381-76ea2d723abb-varrun\") pod \"csi-node-driver-xsl75\" (UID: \"d86ce3dc-83d4-402a-b381-76ea2d723abb\") " pod="calico-system/csi-node-driver-xsl75" Oct 28 05:12:44.552745 kubelet[2793]: E1028 05:12:44.552617 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.552745 kubelet[2793]: W1028 05:12:44.552685 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.552745 kubelet[2793]: E1028 05:12:44.552699 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.553213 kubelet[2793]: I1028 05:12:44.552855 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4d7f\" (UniqueName: \"kubernetes.io/projected/d86ce3dc-83d4-402a-b381-76ea2d723abb-kube-api-access-r4d7f\") pod \"csi-node-driver-xsl75\" (UID: \"d86ce3dc-83d4-402a-b381-76ea2d723abb\") " pod="calico-system/csi-node-driver-xsl75" Oct 28 05:12:44.553710 kubelet[2793]: E1028 05:12:44.553656 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.553865 kubelet[2793]: W1028 05:12:44.553821 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.553865 kubelet[2793]: E1028 05:12:44.553835 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.554187 kubelet[2793]: I1028 05:12:44.554169 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d86ce3dc-83d4-402a-b381-76ea2d723abb-registration-dir\") pod \"csi-node-driver-xsl75\" (UID: \"d86ce3dc-83d4-402a-b381-76ea2d723abb\") " pod="calico-system/csi-node-driver-xsl75" Oct 28 05:12:44.554714 kubelet[2793]: E1028 05:12:44.554633 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.554714 kubelet[2793]: W1028 05:12:44.554647 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.554714 kubelet[2793]: E1028 05:12:44.554660 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.555229 kubelet[2793]: E1028 05:12:44.555215 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.555329 kubelet[2793]: W1028 05:12:44.555317 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.555442 kubelet[2793]: E1028 05:12:44.555363 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.555904 kubelet[2793]: E1028 05:12:44.555834 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.555904 kubelet[2793]: W1028 05:12:44.555846 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.555904 kubelet[2793]: E1028 05:12:44.555857 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.556346 kubelet[2793]: E1028 05:12:44.556323 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.556461 kubelet[2793]: W1028 05:12:44.556430 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.556461 kubelet[2793]: E1028 05:12:44.556446 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.556737 kubelet[2793]: I1028 05:12:44.556712 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86ce3dc-83d4-402a-b381-76ea2d723abb-kubelet-dir\") pod \"csi-node-driver-xsl75\" (UID: \"d86ce3dc-83d4-402a-b381-76ea2d723abb\") " pod="calico-system/csi-node-driver-xsl75" Oct 28 05:12:44.557339 kubelet[2793]: E1028 05:12:44.557323 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.557646 kubelet[2793]: W1028 05:12:44.557522 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.557646 kubelet[2793]: E1028 05:12:44.557535 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.558056 kubelet[2793]: E1028 05:12:44.558016 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.558056 kubelet[2793]: W1028 05:12:44.558027 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.558056 kubelet[2793]: E1028 05:12:44.558041 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.558583 kubelet[2793]: E1028 05:12:44.558550 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.558642 kubelet[2793]: W1028 05:12:44.558580 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.558642 kubelet[2793]: E1028 05:12:44.558609 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.559040 kubelet[2793]: E1028 05:12:44.558920 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.559130 kubelet[2793]: W1028 05:12:44.559053 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.559130 kubelet[2793]: E1028 05:12:44.559065 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.559604 kubelet[2793]: E1028 05:12:44.559534 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.559604 kubelet[2793]: W1028 05:12:44.559548 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.559846 kubelet[2793]: E1028 05:12:44.559560 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.560584 kubelet[2793]: E1028 05:12:44.560539 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.560584 kubelet[2793]: W1028 05:12:44.560560 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.560865 kubelet[2793]: E1028 05:12:44.560812 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.560865 kubelet[2793]: I1028 05:12:44.560845 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d86ce3dc-83d4-402a-b381-76ea2d723abb-socket-dir\") pod \"csi-node-driver-xsl75\" (UID: \"d86ce3dc-83d4-402a-b381-76ea2d723abb\") " pod="calico-system/csi-node-driver-xsl75" Oct 28 05:12:44.562261 kubelet[2793]: E1028 05:12:44.562031 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.562261 kubelet[2793]: W1028 05:12:44.562200 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.562261 kubelet[2793]: E1028 05:12:44.562214 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.563304 kubelet[2793]: E1028 05:12:44.563205 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.563513 kubelet[2793]: W1028 05:12:44.563461 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.563513 kubelet[2793]: E1028 05:12:44.563479 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.585609 kubelet[2793]: E1028 05:12:44.585040 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:44.586220 containerd[1611]: time="2025-10-28T05:12:44.586015009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c4zlz,Uid:cbfc1adb-2a68-49b7-bf04-bf226a5de5ef,Namespace:calico-system,Attempt:0,}" Oct 28 05:12:44.605571 containerd[1611]: time="2025-10-28T05:12:44.605425679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86c849df8c-dvf5t,Uid:41ddec48-d50a-433d-99f5-e3a2701f8ff1,Namespace:calico-system,Attempt:0,} returns sandbox id \"46b7f75e136164309e7e715c219bdc2e1bd7b5e83c7ed09dfcf85a1fec395310\"" Oct 28 05:12:44.606367 kubelet[2793]: E1028 05:12:44.606333 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:44.607995 containerd[1611]: time="2025-10-28T05:12:44.607962626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 28 05:12:44.618799 containerd[1611]: time="2025-10-28T05:12:44.618733080Z" level=info msg="connecting to shim 6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf" address="unix:///run/containerd/s/53cedad7140df970cda1cfae759a7ee50329bcdb45b8d7db32e0ae7b17222791" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:12:44.643937 systemd[1]: Started cri-containerd-6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf.scope - libcontainer container 6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf. 
Oct 28 05:12:44.662685 kubelet[2793]: E1028 05:12:44.662649 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.662685 kubelet[2793]: W1028 05:12:44.662669 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.662685 kubelet[2793]: E1028 05:12:44.662689 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.663021 kubelet[2793]: E1028 05:12:44.663004 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.663021 kubelet[2793]: W1028 05:12:44.663016 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.663097 kubelet[2793]: E1028 05:12:44.663051 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.663322 kubelet[2793]: E1028 05:12:44.663305 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.663322 kubelet[2793]: W1028 05:12:44.663317 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.663392 kubelet[2793]: E1028 05:12:44.663326 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.663643 kubelet[2793]: E1028 05:12:44.663626 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.663643 kubelet[2793]: W1028 05:12:44.663639 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.663710 kubelet[2793]: E1028 05:12:44.663649 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.664306 kubelet[2793]: E1028 05:12:44.664289 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.664306 kubelet[2793]: W1028 05:12:44.664301 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.664390 kubelet[2793]: E1028 05:12:44.664311 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.664703 kubelet[2793]: E1028 05:12:44.664685 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.664703 kubelet[2793]: W1028 05:12:44.664702 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.664946 kubelet[2793]: E1028 05:12:44.664712 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.665102 kubelet[2793]: E1028 05:12:44.665064 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.665137 kubelet[2793]: W1028 05:12:44.665104 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.665137 kubelet[2793]: E1028 05:12:44.665117 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.665860 kubelet[2793]: E1028 05:12:44.665685 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.666258 kubelet[2793]: W1028 05:12:44.665994 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.666258 kubelet[2793]: E1028 05:12:44.666007 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.666682 kubelet[2793]: E1028 05:12:44.666649 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.666682 kubelet[2793]: W1028 05:12:44.666663 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.666682 kubelet[2793]: E1028 05:12:44.666674 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.667186 kubelet[2793]: E1028 05:12:44.667161 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.667186 kubelet[2793]: W1028 05:12:44.667175 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.667264 kubelet[2793]: E1028 05:12:44.667212 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.667502 kubelet[2793]: E1028 05:12:44.667486 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.667502 kubelet[2793]: W1028 05:12:44.667497 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.667563 kubelet[2793]: E1028 05:12:44.667507 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.667798 kubelet[2793]: E1028 05:12:44.667759 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.667844 kubelet[2793]: W1028 05:12:44.667781 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.667844 kubelet[2793]: E1028 05:12:44.667821 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.668277 kubelet[2793]: E1028 05:12:44.668251 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.668277 kubelet[2793]: W1028 05:12:44.668263 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.668277 kubelet[2793]: E1028 05:12:44.668272 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.668506 kubelet[2793]: E1028 05:12:44.668488 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.668506 kubelet[2793]: W1028 05:12:44.668499 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.668506 kubelet[2793]: E1028 05:12:44.668507 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.668739 kubelet[2793]: E1028 05:12:44.668722 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.668836 kubelet[2793]: W1028 05:12:44.668753 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.668836 kubelet[2793]: E1028 05:12:44.668776 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.669074 kubelet[2793]: E1028 05:12:44.669047 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.669074 kubelet[2793]: W1028 05:12:44.669069 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.669146 kubelet[2793]: E1028 05:12:44.669079 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.670647 kubelet[2793]: E1028 05:12:44.670609 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.670647 kubelet[2793]: W1028 05:12:44.670623 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.670647 kubelet[2793]: E1028 05:12:44.670633 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.671159 kubelet[2793]: E1028 05:12:44.671126 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.671159 kubelet[2793]: W1028 05:12:44.671138 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.671159 kubelet[2793]: E1028 05:12:44.671147 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.671771 kubelet[2793]: E1028 05:12:44.671722 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.671771 kubelet[2793]: W1028 05:12:44.671735 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.671771 kubelet[2793]: E1028 05:12:44.671747 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.672108 kubelet[2793]: E1028 05:12:44.672096 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.672238 kubelet[2793]: W1028 05:12:44.672159 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.672238 kubelet[2793]: E1028 05:12:44.672181 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.672580 kubelet[2793]: E1028 05:12:44.672569 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.672654 kubelet[2793]: W1028 05:12:44.672629 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.672654 kubelet[2793]: E1028 05:12:44.672643 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.673090 kubelet[2793]: E1028 05:12:44.673059 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.673090 kubelet[2793]: W1028 05:12:44.673070 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.673090 kubelet[2793]: E1028 05:12:44.673078 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.673474 kubelet[2793]: E1028 05:12:44.673463 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.673564 kubelet[2793]: W1028 05:12:44.673520 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.673564 kubelet[2793]: E1028 05:12:44.673531 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.675024 kubelet[2793]: E1028 05:12:44.674935 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.675320 containerd[1611]: time="2025-10-28T05:12:44.675124093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c4zlz,Uid:cbfc1adb-2a68-49b7-bf04-bf226a5de5ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf\"" Oct 28 05:12:44.675377 kubelet[2793]: W1028 05:12:44.675175 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.675377 kubelet[2793]: E1028 05:12:44.675188 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:44.676243 kubelet[2793]: E1028 05:12:44.676205 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.676243 kubelet[2793]: W1028 05:12:44.676238 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.676243 kubelet[2793]: E1028 05:12:44.676264 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:44.676759 kubelet[2793]: E1028 05:12:44.676491 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:44.684710 kubelet[2793]: E1028 05:12:44.684668 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:44.684710 kubelet[2793]: W1028 05:12:44.684694 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:44.684872 kubelet[2793]: E1028 05:12:44.684720 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:45.968626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87894353.mount: Deactivated successfully. 
Oct 28 05:12:46.463613 kubelet[2793]: E1028 05:12:46.463206 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 05:12:46.593035 containerd[1611]: time="2025-10-28T05:12:46.592972601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:46.593811 containerd[1611]: time="2025-10-28T05:12:46.593755898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 28 05:12:46.594884 containerd[1611]: time="2025-10-28T05:12:46.594845723Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:46.596853 containerd[1611]: time="2025-10-28T05:12:46.596817251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:46.597388 containerd[1611]: time="2025-10-28T05:12:46.597348652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.989355438s" Oct 28 05:12:46.597428 containerd[1611]: time="2025-10-28T05:12:46.597387266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 28 05:12:46.598397 containerd[1611]: time="2025-10-28T05:12:46.598356533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 28 05:12:46.609994 containerd[1611]: time="2025-10-28T05:12:46.609930157Z" level=info msg="CreateContainer within sandbox \"46b7f75e136164309e7e715c219bdc2e1bd7b5e83c7ed09dfcf85a1fec395310\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 28 05:12:46.618827 containerd[1611]: time="2025-10-28T05:12:46.618271215Z" level=info msg="Container b9517a4d6b229e93b763a2867c61940b1d1c5e381089175570edac95cd3957f4: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:12:46.629401 containerd[1611]: time="2025-10-28T05:12:46.629348493Z" level=info msg="CreateContainer within sandbox \"46b7f75e136164309e7e715c219bdc2e1bd7b5e83c7ed09dfcf85a1fec395310\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b9517a4d6b229e93b763a2867c61940b1d1c5e381089175570edac95cd3957f4\"" Oct 28 05:12:46.631780 containerd[1611]: time="2025-10-28T05:12:46.629956620Z" level=info msg="StartContainer for \"b9517a4d6b229e93b763a2867c61940b1d1c5e381089175570edac95cd3957f4\"" Oct 28 05:12:46.631780 containerd[1611]: time="2025-10-28T05:12:46.631301075Z" level=info msg="connecting to shim b9517a4d6b229e93b763a2867c61940b1d1c5e381089175570edac95cd3957f4" address="unix:///run/containerd/s/fdb4254e7b0142e668655a7cb30ed9b735a8ce6a78eb4621c70296b3279a4a01" protocol=ttrpc version=3 Oct 28 05:12:46.662111 systemd[1]: Started cri-containerd-b9517a4d6b229e93b763a2867c61940b1d1c5e381089175570edac95cd3957f4.scope - libcontainer container b9517a4d6b229e93b763a2867c61940b1d1c5e381089175570edac95cd3957f4. 
Oct 28 05:12:46.722358 containerd[1611]: time="2025-10-28T05:12:46.722234684Z" level=info msg="StartContainer for \"b9517a4d6b229e93b763a2867c61940b1d1c5e381089175570edac95cd3957f4\" returns successfully" Oct 28 05:12:47.522778 kubelet[2793]: E1028 05:12:47.522673 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:47.557487 kubelet[2793]: E1028 05:12:47.557442 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.557487 kubelet[2793]: W1028 05:12:47.557459 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.557487 kubelet[2793]: E1028 05:12:47.557478 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.557711 kubelet[2793]: E1028 05:12:47.557651 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.557711 kubelet[2793]: W1028 05:12:47.557659 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.557711 kubelet[2793]: E1028 05:12:47.557667 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.557865 kubelet[2793]: E1028 05:12:47.557843 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.557865 kubelet[2793]: W1028 05:12:47.557855 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.557865 kubelet[2793]: E1028 05:12:47.557864 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.558104 kubelet[2793]: E1028 05:12:47.558088 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.558104 kubelet[2793]: W1028 05:12:47.558098 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.558160 kubelet[2793]: E1028 05:12:47.558107 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.558288 kubelet[2793]: E1028 05:12:47.558265 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.558288 kubelet[2793]: W1028 05:12:47.558278 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.558288 kubelet[2793]: E1028 05:12:47.558286 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.558451 kubelet[2793]: E1028 05:12:47.558437 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.558451 kubelet[2793]: W1028 05:12:47.558448 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.558497 kubelet[2793]: E1028 05:12:47.558456 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.558623 kubelet[2793]: E1028 05:12:47.558609 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.558623 kubelet[2793]: W1028 05:12:47.558618 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.558759 kubelet[2793]: E1028 05:12:47.558625 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.558842 kubelet[2793]: E1028 05:12:47.558826 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.558842 kubelet[2793]: W1028 05:12:47.558837 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.558897 kubelet[2793]: E1028 05:12:47.558846 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.559066 kubelet[2793]: E1028 05:12:47.559037 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.559066 kubelet[2793]: W1028 05:12:47.559048 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.559066 kubelet[2793]: E1028 05:12:47.559056 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.559280 kubelet[2793]: E1028 05:12:47.559214 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.559280 kubelet[2793]: W1028 05:12:47.559221 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.559280 kubelet[2793]: E1028 05:12:47.559228 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.559388 kubelet[2793]: E1028 05:12:47.559373 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.559388 kubelet[2793]: W1028 05:12:47.559382 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.559442 kubelet[2793]: E1028 05:12:47.559389 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.559600 kubelet[2793]: E1028 05:12:47.559577 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.559600 kubelet[2793]: W1028 05:12:47.559589 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.559600 kubelet[2793]: E1028 05:12:47.559599 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.559820 kubelet[2793]: E1028 05:12:47.559780 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.559820 kubelet[2793]: W1028 05:12:47.559803 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.559820 kubelet[2793]: E1028 05:12:47.559811 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.559992 kubelet[2793]: E1028 05:12:47.559975 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.559992 kubelet[2793]: W1028 05:12:47.559985 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.559992 kubelet[2793]: E1028 05:12:47.559991 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.560222 kubelet[2793]: E1028 05:12:47.560196 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.560262 kubelet[2793]: W1028 05:12:47.560220 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.560262 kubelet[2793]: E1028 05:12:47.560245 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.582689 kubelet[2793]: E1028 05:12:47.582655 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.582689 kubelet[2793]: W1028 05:12:47.582678 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.582811 kubelet[2793]: E1028 05:12:47.582711 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.583015 kubelet[2793]: E1028 05:12:47.582992 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.583015 kubelet[2793]: W1028 05:12:47.583008 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.583077 kubelet[2793]: E1028 05:12:47.583022 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.583283 kubelet[2793]: E1028 05:12:47.583257 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.583283 kubelet[2793]: W1028 05:12:47.583274 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.583347 kubelet[2793]: E1028 05:12:47.583284 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.583484 kubelet[2793]: E1028 05:12:47.583469 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.583484 kubelet[2793]: W1028 05:12:47.583479 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.583548 kubelet[2793]: E1028 05:12:47.583486 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.583668 kubelet[2793]: E1028 05:12:47.583653 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.583668 kubelet[2793]: W1028 05:12:47.583663 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.583742 kubelet[2793]: E1028 05:12:47.583672 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.583900 kubelet[2793]: E1028 05:12:47.583885 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.583900 kubelet[2793]: W1028 05:12:47.583896 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.583955 kubelet[2793]: E1028 05:12:47.583904 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.584242 kubelet[2793]: E1028 05:12:47.584220 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.584242 kubelet[2793]: W1028 05:12:47.584232 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.584290 kubelet[2793]: E1028 05:12:47.584241 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.584453 kubelet[2793]: E1028 05:12:47.584436 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.584453 kubelet[2793]: W1028 05:12:47.584448 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.584517 kubelet[2793]: E1028 05:12:47.584458 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.584671 kubelet[2793]: E1028 05:12:47.584648 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.584671 kubelet[2793]: W1028 05:12:47.584659 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.584671 kubelet[2793]: E1028 05:12:47.584667 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.584892 kubelet[2793]: E1028 05:12:47.584876 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.584892 kubelet[2793]: W1028 05:12:47.584886 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.584892 kubelet[2793]: E1028 05:12:47.584894 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.585076 kubelet[2793]: E1028 05:12:47.585062 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.585076 kubelet[2793]: W1028 05:12:47.585071 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.585120 kubelet[2793]: E1028 05:12:47.585079 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 05:12:47.586630 kubelet[2793]: E1028 05:12:47.586620 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 05:12:47.586630 kubelet[2793]: W1028 05:12:47.586628 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 05:12:47.586678 kubelet[2793]: E1028 05:12:47.586636 2793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 05:12:47.598450 kubelet[2793]: I1028 05:12:47.598224 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86c849df8c-dvf5t" podStartSLOduration=1.607645427 podStartE2EDuration="3.598204176s" podCreationTimestamp="2025-10-28 05:12:44 +0000 UTC" firstStartedPulling="2025-10-28 05:12:44.607621993 +0000 UTC m=+18.277148064" lastFinishedPulling="2025-10-28 05:12:46.598180742 +0000 UTC m=+20.267706813" observedRunningTime="2025-10-28 05:12:47.597893701 +0000 UTC m=+21.267419802" watchObservedRunningTime="2025-10-28 05:12:47.598204176 +0000 UTC m=+21.267730277" Oct 28 05:12:47.841336 containerd[1611]: time="2025-10-28T05:12:47.841227716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:47.842166 containerd[1611]: time="2025-10-28T05:12:47.842139134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 28 05:12:47.843412 containerd[1611]: time="2025-10-28T05:12:47.843379322Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:47.845200 containerd[1611]: time="2025-10-28T05:12:47.845176298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:47.845687 containerd[1611]: time="2025-10-28T05:12:47.845657606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.247268882s" Oct 28 05:12:47.845748 containerd[1611]: time="2025-10-28T05:12:47.845687172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 28 05:12:47.849431 containerd[1611]: time="2025-10-28T05:12:47.849394540Z" level=info msg="CreateContainer within sandbox \"6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 28 05:12:47.857670 containerd[1611]: time="2025-10-28T05:12:47.857613723Z" level=info msg="Container 1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:12:47.869489 containerd[1611]: time="2025-10-28T05:12:47.869443202Z" level=info msg="CreateContainer within sandbox \"6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166\"" Oct 28 05:12:47.870138 containerd[1611]: time="2025-10-28T05:12:47.870109808Z" 
level=info msg="StartContainer for \"1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166\"" Oct 28 05:12:47.872746 containerd[1611]: time="2025-10-28T05:12:47.872691855Z" level=info msg="connecting to shim 1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166" address="unix:///run/containerd/s/53cedad7140df970cda1cfae759a7ee50329bcdb45b8d7db32e0ae7b17222791" protocol=ttrpc version=3 Oct 28 05:12:47.899965 systemd[1]: Started cri-containerd-1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166.scope - libcontainer container 1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166. Oct 28 05:12:47.949267 containerd[1611]: time="2025-10-28T05:12:47.949218969Z" level=info msg="StartContainer for \"1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166\" returns successfully" Oct 28 05:12:47.960514 systemd[1]: cri-containerd-1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166.scope: Deactivated successfully. Oct 28 05:12:47.962219 containerd[1611]: time="2025-10-28T05:12:47.962187736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166\" id:\"1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166\" pid:3506 exited_at:{seconds:1761628367 nanos:961719202}" Oct 28 05:12:47.962336 containerd[1611]: time="2025-10-28T05:12:47.962298114Z" level=info msg="received exit event container_id:\"1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166\" id:\"1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166\" pid:3506 exited_at:{seconds:1761628367 nanos:961719202}" Oct 28 05:12:47.986513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1700b4a800cebf6f2c123675b1fed18e84aeb400d1aee4c75d961398f3dab166-rootfs.mount: Deactivated successfully. 
Oct 28 05:12:48.463122 kubelet[2793]: E1028 05:12:48.463049 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 05:12:48.526294 kubelet[2793]: I1028 05:12:48.526256 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 05:12:48.526763 kubelet[2793]: E1028 05:12:48.526497 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:48.526763 kubelet[2793]: E1028 05:12:48.526728 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:48.527406 containerd[1611]: time="2025-10-28T05:12:48.527373955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 28 05:12:50.463349 kubelet[2793]: E1028 05:12:50.463289 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 05:12:50.998407 containerd[1611]: time="2025-10-28T05:12:50.998339390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:50.999036 containerd[1611]: time="2025-10-28T05:12:50.998984204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 28 05:12:51.000068 containerd[1611]: 
time="2025-10-28T05:12:51.000030184Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:51.002324 containerd[1611]: time="2025-10-28T05:12:51.002287736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:12:51.003047 containerd[1611]: time="2025-10-28T05:12:51.003017179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.475609571s" Oct 28 05:12:51.003102 containerd[1611]: time="2025-10-28T05:12:51.003054428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 28 05:12:51.007167 containerd[1611]: time="2025-10-28T05:12:51.007122659Z" level=info msg="CreateContainer within sandbox \"6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 28 05:12:51.017474 containerd[1611]: time="2025-10-28T05:12:51.017416811Z" level=info msg="Container b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:12:51.026420 containerd[1611]: time="2025-10-28T05:12:51.026361872Z" level=info msg="CreateContainer within sandbox \"6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e\"" Oct 28 
05:12:51.027063 containerd[1611]: time="2025-10-28T05:12:51.026979986Z" level=info msg="StartContainer for \"b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e\"" Oct 28 05:12:51.028436 containerd[1611]: time="2025-10-28T05:12:51.028405070Z" level=info msg="connecting to shim b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e" address="unix:///run/containerd/s/53cedad7140df970cda1cfae759a7ee50329bcdb45b8d7db32e0ae7b17222791" protocol=ttrpc version=3 Oct 28 05:12:51.051936 systemd[1]: Started cri-containerd-b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e.scope - libcontainer container b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e. Oct 28 05:12:51.095419 containerd[1611]: time="2025-10-28T05:12:51.095381974Z" level=info msg="StartContainer for \"b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e\" returns successfully" Oct 28 05:12:51.545403 kubelet[2793]: E1028 05:12:51.545199 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:52.167530 systemd[1]: cri-containerd-b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e.scope: Deactivated successfully. Oct 28 05:12:52.168089 systemd[1]: cri-containerd-b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e.scope: Consumed 634ms CPU time, 177.8M memory peak, 3.8M read from disk, 171.3M written to disk. 
Oct 28 05:12:52.169063 containerd[1611]: time="2025-10-28T05:12:52.169024147Z" level=info msg="received exit event container_id:\"b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e\" id:\"b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e\" pid:3563 exited_at:{seconds:1761628372 nanos:168210735}" Oct 28 05:12:52.169360 containerd[1611]: time="2025-10-28T05:12:52.169261223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e\" id:\"b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e\" pid:3563 exited_at:{seconds:1761628372 nanos:168210735}" Oct 28 05:12:52.197390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0e937222a66b3f0f311de57a5fd97a3b3e792defc2bd9606742cccaeaf5482e-rootfs.mount: Deactivated successfully. Oct 28 05:12:52.237210 kubelet[2793]: I1028 05:12:52.237153 2793 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 28 05:12:52.334682 systemd[1]: Created slice kubepods-burstable-podedaaa83e_dc34_4a96_9305_4f3c94df0a41.slice - libcontainer container kubepods-burstable-podedaaa83e_dc34_4a96_9305_4f3c94df0a41.slice. Oct 28 05:12:52.348114 systemd[1]: Created slice kubepods-besteffort-pod17014fb8_50b3_4fb0_83a0_7ba96dfdd77b.slice - libcontainer container kubepods-besteffort-pod17014fb8_50b3_4fb0_83a0_7ba96dfdd77b.slice. Oct 28 05:12:52.356033 systemd[1]: Created slice kubepods-besteffort-pode27c6b1a_18a7_411c_9050_0f3b48a38781.slice - libcontainer container kubepods-besteffort-pode27c6b1a_18a7_411c_9050_0f3b48a38781.slice. Oct 28 05:12:52.362556 systemd[1]: Created slice kubepods-besteffort-podf96a244b_47a4_4d81_b08b_34e49822ec81.slice - libcontainer container kubepods-besteffort-podf96a244b_47a4_4d81_b08b_34e49822ec81.slice. 
Oct 28 05:12:52.370135 systemd[1]: Created slice kubepods-burstable-pod1c414253_aa48_4b05_96cc_7f6614bb7635.slice - libcontainer container kubepods-burstable-pod1c414253_aa48_4b05_96cc_7f6614bb7635.slice. Oct 28 05:12:52.377813 systemd[1]: Created slice kubepods-besteffort-pod49813e29_063b_4ee4_bacc_cb4ec7eba7e0.slice - libcontainer container kubepods-besteffort-pod49813e29_063b_4ee4_bacc_cb4ec7eba7e0.slice. Oct 28 05:12:52.383364 systemd[1]: Created slice kubepods-besteffort-pod4bd50f09_b032_49f4_8f6f_5043dcd6661f.slice - libcontainer container kubepods-besteffort-pod4bd50f09_b032_49f4_8f6f_5043dcd6661f.slice. Oct 28 05:12:52.416569 kubelet[2793]: I1028 05:12:52.416486 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edaaa83e-dc34-4a96-9305-4f3c94df0a41-config-volume\") pod \"coredns-674b8bbfcf-4mflf\" (UID: \"edaaa83e-dc34-4a96-9305-4f3c94df0a41\") " pod="kube-system/coredns-674b8bbfcf-4mflf" Oct 28 05:12:52.416569 kubelet[2793]: I1028 05:12:52.416552 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17014fb8-50b3-4fb0-83a0-7ba96dfdd77b-tigera-ca-bundle\") pod \"calico-kube-controllers-5c6c464845-gchcn\" (UID: \"17014fb8-50b3-4fb0-83a0-7ba96dfdd77b\") " pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" Oct 28 05:12:52.416569 kubelet[2793]: I1028 05:12:52.416581 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvswr\" (UniqueName: \"kubernetes.io/projected/17014fb8-50b3-4fb0-83a0-7ba96dfdd77b-kube-api-access-fvswr\") pod \"calico-kube-controllers-5c6c464845-gchcn\" (UID: \"17014fb8-50b3-4fb0-83a0-7ba96dfdd77b\") " pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" Oct 28 05:12:52.416857 kubelet[2793]: I1028 05:12:52.416610 2793 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4bd50f09-b032-49f4-8f6f-5043dcd6661f-calico-apiserver-certs\") pod \"calico-apiserver-84854587bd-trjrh\" (UID: \"4bd50f09-b032-49f4-8f6f-5043dcd6661f\") " pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" Oct 28 05:12:52.416857 kubelet[2793]: I1028 05:12:52.416722 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f96a244b-47a4-4d81-b08b-34e49822ec81-whisker-ca-bundle\") pod \"whisker-67d7794bb6-wdl9j\" (UID: \"f96a244b-47a4-4d81-b08b-34e49822ec81\") " pod="calico-system/whisker-67d7794bb6-wdl9j" Oct 28 05:12:52.416857 kubelet[2793]: I1028 05:12:52.416820 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjfcl\" (UniqueName: \"kubernetes.io/projected/f96a244b-47a4-4d81-b08b-34e49822ec81-kube-api-access-sjfcl\") pod \"whisker-67d7794bb6-wdl9j\" (UID: \"f96a244b-47a4-4d81-b08b-34e49822ec81\") " pod="calico-system/whisker-67d7794bb6-wdl9j" Oct 28 05:12:52.417005 kubelet[2793]: I1028 05:12:52.416865 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zs97\" (UniqueName: \"kubernetes.io/projected/49813e29-063b-4ee4-bacc-cb4ec7eba7e0-kube-api-access-9zs97\") pod \"goldmane-666569f655-k9fjz\" (UID: \"49813e29-063b-4ee4-bacc-cb4ec7eba7e0\") " pod="calico-system/goldmane-666569f655-k9fjz" Oct 28 05:12:52.417005 kubelet[2793]: I1028 05:12:52.416893 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e27c6b1a-18a7-411c-9050-0f3b48a38781-calico-apiserver-certs\") pod \"calico-apiserver-84854587bd-9b74s\" (UID: \"e27c6b1a-18a7-411c-9050-0f3b48a38781\") " 
pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" Oct 28 05:12:52.417005 kubelet[2793]: I1028 05:12:52.416936 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8hs2\" (UniqueName: \"kubernetes.io/projected/e27c6b1a-18a7-411c-9050-0f3b48a38781-kube-api-access-j8hs2\") pod \"calico-apiserver-84854587bd-9b74s\" (UID: \"e27c6b1a-18a7-411c-9050-0f3b48a38781\") " pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" Oct 28 05:12:52.417093 kubelet[2793]: I1028 05:12:52.417039 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/49813e29-063b-4ee4-bacc-cb4ec7eba7e0-goldmane-key-pair\") pod \"goldmane-666569f655-k9fjz\" (UID: \"49813e29-063b-4ee4-bacc-cb4ec7eba7e0\") " pod="calico-system/goldmane-666569f655-k9fjz" Oct 28 05:12:52.417093 kubelet[2793]: I1028 05:12:52.417078 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f96a244b-47a4-4d81-b08b-34e49822ec81-whisker-backend-key-pair\") pod \"whisker-67d7794bb6-wdl9j\" (UID: \"f96a244b-47a4-4d81-b08b-34e49822ec81\") " pod="calico-system/whisker-67d7794bb6-wdl9j" Oct 28 05:12:52.417140 kubelet[2793]: I1028 05:12:52.417100 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49813e29-063b-4ee4-bacc-cb4ec7eba7e0-goldmane-ca-bundle\") pod \"goldmane-666569f655-k9fjz\" (UID: \"49813e29-063b-4ee4-bacc-cb4ec7eba7e0\") " pod="calico-system/goldmane-666569f655-k9fjz" Oct 28 05:12:52.417223 kubelet[2793]: I1028 05:12:52.417118 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq5hb\" (UniqueName: 
\"kubernetes.io/projected/1c414253-aa48-4b05-96cc-7f6614bb7635-kube-api-access-cq5hb\") pod \"coredns-674b8bbfcf-6kw6d\" (UID: \"1c414253-aa48-4b05-96cc-7f6614bb7635\") " pod="kube-system/coredns-674b8bbfcf-6kw6d" Oct 28 05:12:52.417223 kubelet[2793]: I1028 05:12:52.417167 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm55k\" (UniqueName: \"kubernetes.io/projected/edaaa83e-dc34-4a96-9305-4f3c94df0a41-kube-api-access-gm55k\") pod \"coredns-674b8bbfcf-4mflf\" (UID: \"edaaa83e-dc34-4a96-9305-4f3c94df0a41\") " pod="kube-system/coredns-674b8bbfcf-4mflf" Oct 28 05:12:52.417223 kubelet[2793]: I1028 05:12:52.417187 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wgcd\" (UniqueName: \"kubernetes.io/projected/4bd50f09-b032-49f4-8f6f-5043dcd6661f-kube-api-access-7wgcd\") pod \"calico-apiserver-84854587bd-trjrh\" (UID: \"4bd50f09-b032-49f4-8f6f-5043dcd6661f\") " pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" Oct 28 05:12:52.417223 kubelet[2793]: I1028 05:12:52.417208 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49813e29-063b-4ee4-bacc-cb4ec7eba7e0-config\") pod \"goldmane-666569f655-k9fjz\" (UID: \"49813e29-063b-4ee4-bacc-cb4ec7eba7e0\") " pod="calico-system/goldmane-666569f655-k9fjz" Oct 28 05:12:52.417371 kubelet[2793]: I1028 05:12:52.417229 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c414253-aa48-4b05-96cc-7f6614bb7635-config-volume\") pod \"coredns-674b8bbfcf-6kw6d\" (UID: \"1c414253-aa48-4b05-96cc-7f6614bb7635\") " pod="kube-system/coredns-674b8bbfcf-6kw6d" Oct 28 05:12:52.471306 systemd[1]: Created slice kubepods-besteffort-podd86ce3dc_83d4_402a_b381_76ea2d723abb.slice - libcontainer container 
kubepods-besteffort-podd86ce3dc_83d4_402a_b381_76ea2d723abb.slice. Oct 28 05:12:52.474292 containerd[1611]: time="2025-10-28T05:12:52.474236140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsl75,Uid:d86ce3dc-83d4-402a-b381-76ea2d723abb,Namespace:calico-system,Attempt:0,}" Oct 28 05:12:52.565400 kubelet[2793]: E1028 05:12:52.565208 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:52.568495 containerd[1611]: time="2025-10-28T05:12:52.568000036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 28 05:12:52.629101 containerd[1611]: time="2025-10-28T05:12:52.629029324Z" level=error msg="Failed to destroy network for sandbox \"0b6d8aea7082816fa07fccccb867d604133bea23b3c2d1dec62863204276d94e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.630639 containerd[1611]: time="2025-10-28T05:12:52.630574933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsl75,Uid:d86ce3dc-83d4-402a-b381-76ea2d723abb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b6d8aea7082816fa07fccccb867d604133bea23b3c2d1dec62863204276d94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.630875 kubelet[2793]: E1028 05:12:52.630834 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b6d8aea7082816fa07fccccb867d604133bea23b3c2d1dec62863204276d94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.630937 kubelet[2793]: E1028 05:12:52.630895 2793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b6d8aea7082816fa07fccccb867d604133bea23b3c2d1dec62863204276d94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xsl75" Oct 28 05:12:52.630937 kubelet[2793]: E1028 05:12:52.630919 2793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b6d8aea7082816fa07fccccb867d604133bea23b3c2d1dec62863204276d94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xsl75" Oct 28 05:12:52.631017 kubelet[2793]: E1028 05:12:52.630969 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xsl75_calico-system(d86ce3dc-83d4-402a-b381-76ea2d723abb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xsl75_calico-system(d86ce3dc-83d4-402a-b381-76ea2d723abb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b6d8aea7082816fa07fccccb867d604133bea23b3c2d1dec62863204276d94e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 05:12:52.642465 kubelet[2793]: E1028 05:12:52.642424 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:52.643113 containerd[1611]: time="2025-10-28T05:12:52.643075944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4mflf,Uid:edaaa83e-dc34-4a96-9305-4f3c94df0a41,Namespace:kube-system,Attempt:0,}" Oct 28 05:12:52.651514 containerd[1611]: time="2025-10-28T05:12:52.651475355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6c464845-gchcn,Uid:17014fb8-50b3-4fb0-83a0-7ba96dfdd77b,Namespace:calico-system,Attempt:0,}" Oct 28 05:12:52.660959 containerd[1611]: time="2025-10-28T05:12:52.660918049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84854587bd-9b74s,Uid:e27c6b1a-18a7-411c-9050-0f3b48a38781,Namespace:calico-apiserver,Attempt:0,}" Oct 28 05:12:52.667376 containerd[1611]: time="2025-10-28T05:12:52.666525814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d7794bb6-wdl9j,Uid:f96a244b-47a4-4d81-b08b-34e49822ec81,Namespace:calico-system,Attempt:0,}" Oct 28 05:12:52.674269 kubelet[2793]: E1028 05:12:52.674228 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:12:52.677285 containerd[1611]: time="2025-10-28T05:12:52.677251714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6kw6d,Uid:1c414253-aa48-4b05-96cc-7f6614bb7635,Namespace:kube-system,Attempt:0,}" Oct 28 05:12:52.681423 containerd[1611]: time="2025-10-28T05:12:52.681389643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9fjz,Uid:49813e29-063b-4ee4-bacc-cb4ec7eba7e0,Namespace:calico-system,Attempt:0,}" Oct 28 05:12:52.690311 containerd[1611]: time="2025-10-28T05:12:52.690269638Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-84854587bd-trjrh,Uid:4bd50f09-b032-49f4-8f6f-5043dcd6661f,Namespace:calico-apiserver,Attempt:0,}" Oct 28 05:12:52.740504 containerd[1611]: time="2025-10-28T05:12:52.740350678Z" level=error msg="Failed to destroy network for sandbox \"5564ef153a82c39efd02ce0e622f508e40cbc240112c31cdb42e629c2740a333\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.747824 containerd[1611]: time="2025-10-28T05:12:52.747748143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4mflf,Uid:edaaa83e-dc34-4a96-9305-4f3c94df0a41,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5564ef153a82c39efd02ce0e622f508e40cbc240112c31cdb42e629c2740a333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.752614 kubelet[2793]: E1028 05:12:52.752547 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5564ef153a82c39efd02ce0e622f508e40cbc240112c31cdb42e629c2740a333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.752714 kubelet[2793]: E1028 05:12:52.752643 2793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5564ef153a82c39efd02ce0e622f508e40cbc240112c31cdb42e629c2740a333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-4mflf" Oct 28 05:12:52.752714 kubelet[2793]: E1028 05:12:52.752668 2793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5564ef153a82c39efd02ce0e622f508e40cbc240112c31cdb42e629c2740a333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4mflf" Oct 28 05:12:52.752849 kubelet[2793]: E1028 05:12:52.752724 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4mflf_kube-system(edaaa83e-dc34-4a96-9305-4f3c94df0a41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4mflf_kube-system(edaaa83e-dc34-4a96-9305-4f3c94df0a41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5564ef153a82c39efd02ce0e622f508e40cbc240112c31cdb42e629c2740a333\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4mflf" podUID="edaaa83e-dc34-4a96-9305-4f3c94df0a41" Oct 28 05:12:52.770500 containerd[1611]: time="2025-10-28T05:12:52.770321805Z" level=error msg="Failed to destroy network for sandbox \"fd28b0cacbcccfe337cdb4c2171bd7ce977abd672847eb082f383df3f204eba0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.776066 containerd[1611]: time="2025-10-28T05:12:52.776015852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6kw6d,Uid:1c414253-aa48-4b05-96cc-7f6614bb7635,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"fd28b0cacbcccfe337cdb4c2171bd7ce977abd672847eb082f383df3f204eba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.776528 kubelet[2793]: E1028 05:12:52.776466 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd28b0cacbcccfe337cdb4c2171bd7ce977abd672847eb082f383df3f204eba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.776838 kubelet[2793]: E1028 05:12:52.776561 2793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd28b0cacbcccfe337cdb4c2171bd7ce977abd672847eb082f383df3f204eba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6kw6d" Oct 28 05:12:52.776838 kubelet[2793]: E1028 05:12:52.776594 2793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd28b0cacbcccfe337cdb4c2171bd7ce977abd672847eb082f383df3f204eba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6kw6d" Oct 28 05:12:52.776838 kubelet[2793]: E1028 05:12:52.776653 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6kw6d_kube-system(1c414253-aa48-4b05-96cc-7f6614bb7635)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-674b8bbfcf-6kw6d_kube-system(1c414253-aa48-4b05-96cc-7f6614bb7635)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd28b0cacbcccfe337cdb4c2171bd7ce977abd672847eb082f383df3f204eba0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6kw6d" podUID="1c414253-aa48-4b05-96cc-7f6614bb7635" Oct 28 05:12:52.792480 containerd[1611]: time="2025-10-28T05:12:52.792407046Z" level=error msg="Failed to destroy network for sandbox \"d071a36566ae981d2d2755380ebf21728c0c5c81860b08359345cf997409ba3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.793087 containerd[1611]: time="2025-10-28T05:12:52.792407036Z" level=error msg="Failed to destroy network for sandbox \"de2bc37a25cc68a3ea7a9f3ac8dccd66480d02bcdf7a6a416e6ba65ce8cfc14c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.794225 containerd[1611]: time="2025-10-28T05:12:52.794082851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6c464845-gchcn,Uid:17014fb8-50b3-4fb0-83a0-7ba96dfdd77b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071a36566ae981d2d2755380ebf21728c0c5c81860b08359345cf997409ba3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.794708 kubelet[2793]: E1028 05:12:52.794523 2793 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071a36566ae981d2d2755380ebf21728c0c5c81860b08359345cf997409ba3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.794708 kubelet[2793]: E1028 05:12:52.794593 2793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071a36566ae981d2d2755380ebf21728c0c5c81860b08359345cf997409ba3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" Oct 28 05:12:52.794708 kubelet[2793]: E1028 05:12:52.794622 2793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071a36566ae981d2d2755380ebf21728c0c5c81860b08359345cf997409ba3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" Oct 28 05:12:52.794850 kubelet[2793]: E1028 05:12:52.794670 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c6c464845-gchcn_calico-system(17014fb8-50b3-4fb0-83a0-7ba96dfdd77b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c6c464845-gchcn_calico-system(17014fb8-50b3-4fb0-83a0-7ba96dfdd77b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d071a36566ae981d2d2755380ebf21728c0c5c81860b08359345cf997409ba3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" podUID="17014fb8-50b3-4fb0-83a0-7ba96dfdd77b" Oct 28 05:12:52.795725 containerd[1611]: time="2025-10-28T05:12:52.795694765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d7794bb6-wdl9j,Uid:f96a244b-47a4-4d81-b08b-34e49822ec81,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2bc37a25cc68a3ea7a9f3ac8dccd66480d02bcdf7a6a416e6ba65ce8cfc14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.795963 kubelet[2793]: E1028 05:12:52.795942 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2bc37a25cc68a3ea7a9f3ac8dccd66480d02bcdf7a6a416e6ba65ce8cfc14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.796236 kubelet[2793]: E1028 05:12:52.796217 2793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2bc37a25cc68a3ea7a9f3ac8dccd66480d02bcdf7a6a416e6ba65ce8cfc14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67d7794bb6-wdl9j" Oct 28 05:12:52.796309 kubelet[2793]: E1028 05:12:52.796239 2793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de2bc37a25cc68a3ea7a9f3ac8dccd66480d02bcdf7a6a416e6ba65ce8cfc14c\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67d7794bb6-wdl9j" Oct 28 05:12:52.796309 kubelet[2793]: E1028 05:12:52.796274 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67d7794bb6-wdl9j_calico-system(f96a244b-47a4-4d81-b08b-34e49822ec81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67d7794bb6-wdl9j_calico-system(f96a244b-47a4-4d81-b08b-34e49822ec81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de2bc37a25cc68a3ea7a9f3ac8dccd66480d02bcdf7a6a416e6ba65ce8cfc14c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67d7794bb6-wdl9j" podUID="f96a244b-47a4-4d81-b08b-34e49822ec81" Oct 28 05:12:52.801161 containerd[1611]: time="2025-10-28T05:12:52.801104177Z" level=error msg="Failed to destroy network for sandbox \"c5876d4bc09980368929386fe146723bf0eafe2fe771700f35edd86055c47e10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.802907 containerd[1611]: time="2025-10-28T05:12:52.802822952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84854587bd-trjrh,Uid:4bd50f09-b032-49f4-8f6f-5043dcd6661f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5876d4bc09980368929386fe146723bf0eafe2fe771700f35edd86055c47e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 28 05:12:52.803735 kubelet[2793]: E1028 05:12:52.803361 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5876d4bc09980368929386fe146723bf0eafe2fe771700f35edd86055c47e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.803735 kubelet[2793]: E1028 05:12:52.803449 2793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5876d4bc09980368929386fe146723bf0eafe2fe771700f35edd86055c47e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" Oct 28 05:12:52.803735 kubelet[2793]: E1028 05:12:52.803472 2793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5876d4bc09980368929386fe146723bf0eafe2fe771700f35edd86055c47e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" Oct 28 05:12:52.804024 kubelet[2793]: E1028 05:12:52.803522 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84854587bd-trjrh_calico-apiserver(4bd50f09-b032-49f4-8f6f-5043dcd6661f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84854587bd-trjrh_calico-apiserver(4bd50f09-b032-49f4-8f6f-5043dcd6661f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c5876d4bc09980368929386fe146723bf0eafe2fe771700f35edd86055c47e10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" podUID="4bd50f09-b032-49f4-8f6f-5043dcd6661f" Oct 28 05:12:52.810096 containerd[1611]: time="2025-10-28T05:12:52.810032994Z" level=error msg="Failed to destroy network for sandbox \"3d2501a84f87249713e0a982ddf81efd2b4daecdb0853d02b757182674876e06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.815906 containerd[1611]: time="2025-10-28T05:12:52.815855875Z" level=error msg="Failed to destroy network for sandbox \"03c42decfbed81ecc36610029bd3466207c27a10aecba611354b634290f1fa30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.816154 containerd[1611]: time="2025-10-28T05:12:52.816097510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84854587bd-9b74s,Uid:e27c6b1a-18a7-411c-9050-0f3b48a38781,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2501a84f87249713e0a982ddf81efd2b4daecdb0853d02b757182674876e06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.816401 kubelet[2793]: E1028 05:12:52.816360 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2501a84f87249713e0a982ddf81efd2b4daecdb0853d02b757182674876e06\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.816460 kubelet[2793]: E1028 05:12:52.816431 2793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2501a84f87249713e0a982ddf81efd2b4daecdb0853d02b757182674876e06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" Oct 28 05:12:52.816460 kubelet[2793]: E1028 05:12:52.816453 2793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2501a84f87249713e0a982ddf81efd2b4daecdb0853d02b757182674876e06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" Oct 28 05:12:52.816545 kubelet[2793]: E1028 05:12:52.816514 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84854587bd-9b74s_calico-apiserver(e27c6b1a-18a7-411c-9050-0f3b48a38781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84854587bd-9b74s_calico-apiserver(e27c6b1a-18a7-411c-9050-0f3b48a38781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d2501a84f87249713e0a982ddf81efd2b4daecdb0853d02b757182674876e06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" 
podUID="e27c6b1a-18a7-411c-9050-0f3b48a38781" Oct 28 05:12:52.817953 containerd[1611]: time="2025-10-28T05:12:52.817886286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9fjz,Uid:49813e29-063b-4ee4-bacc-cb4ec7eba7e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c42decfbed81ecc36610029bd3466207c27a10aecba611354b634290f1fa30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.818225 kubelet[2793]: E1028 05:12:52.818152 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c42decfbed81ecc36610029bd3466207c27a10aecba611354b634290f1fa30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 05:12:52.818278 kubelet[2793]: E1028 05:12:52.818245 2793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c42decfbed81ecc36610029bd3466207c27a10aecba611354b634290f1fa30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k9fjz" Oct 28 05:12:52.818278 kubelet[2793]: E1028 05:12:52.818271 2793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c42decfbed81ecc36610029bd3466207c27a10aecba611354b634290f1fa30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-666569f655-k9fjz" Oct 28 05:12:52.818366 kubelet[2793]: E1028 05:12:52.818331 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-k9fjz_calico-system(49813e29-063b-4ee4-bacc-cb4ec7eba7e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-k9fjz_calico-system(49813e29-063b-4ee4-bacc-cb4ec7eba7e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03c42decfbed81ecc36610029bd3466207c27a10aecba611354b634290f1fa30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-k9fjz" podUID="49813e29-063b-4ee4-bacc-cb4ec7eba7e0" Oct 28 05:12:53.205996 systemd[1]: run-netns-cni\x2d7f28d036\x2de5cd\x2dfb6d\x2d69ff\x2d1182d9aed8d6.mount: Deactivated successfully. Oct 28 05:13:01.064373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041712026.mount: Deactivated successfully. 
Oct 28 05:13:02.389504 containerd[1611]: time="2025-10-28T05:13:02.389416968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:13:02.414960 containerd[1611]: time="2025-10-28T05:13:02.390429982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 28 05:13:02.414960 containerd[1611]: time="2025-10-28T05:13:02.391949427Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:13:02.415189 containerd[1611]: time="2025-10-28T05:13:02.411928087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.843873249s" Oct 28 05:13:02.415189 containerd[1611]: time="2025-10-28T05:13:02.415085501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 28 05:13:02.415777 containerd[1611]: time="2025-10-28T05:13:02.415740872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 05:13:02.438349 containerd[1611]: time="2025-10-28T05:13:02.438275365Z" level=info msg="CreateContainer within sandbox \"6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 28 05:13:02.461555 containerd[1611]: time="2025-10-28T05:13:02.461502429Z" level=info msg="Container 
839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:13:02.473923 containerd[1611]: time="2025-10-28T05:13:02.473866971Z" level=info msg="CreateContainer within sandbox \"6beb0e8b93baaf8773e94cc29a3fd0bb4cfd60ca0486ca759ee2ca6c491aa9cf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076\"" Oct 28 05:13:02.474360 containerd[1611]: time="2025-10-28T05:13:02.474332136Z" level=info msg="StartContainer for \"839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076\"" Oct 28 05:13:02.476644 containerd[1611]: time="2025-10-28T05:13:02.476607721Z" level=info msg="connecting to shim 839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076" address="unix:///run/containerd/s/53cedad7140df970cda1cfae759a7ee50329bcdb45b8d7db32e0ae7b17222791" protocol=ttrpc version=3 Oct 28 05:13:02.561988 systemd[1]: Started cri-containerd-839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076.scope - libcontainer container 839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076. Oct 28 05:13:02.678609 containerd[1611]: time="2025-10-28T05:13:02.678563932Z" level=info msg="StartContainer for \"839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076\" returns successfully" Oct 28 05:13:02.748064 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 28 05:13:02.749337 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 28 05:13:02.890595 kubelet[2793]: I1028 05:13:02.890526 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f96a244b-47a4-4d81-b08b-34e49822ec81-whisker-backend-key-pair\") pod \"f96a244b-47a4-4d81-b08b-34e49822ec81\" (UID: \"f96a244b-47a4-4d81-b08b-34e49822ec81\") " Oct 28 05:13:02.891114 kubelet[2793]: I1028 05:13:02.890612 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjfcl\" (UniqueName: \"kubernetes.io/projected/f96a244b-47a4-4d81-b08b-34e49822ec81-kube-api-access-sjfcl\") pod \"f96a244b-47a4-4d81-b08b-34e49822ec81\" (UID: \"f96a244b-47a4-4d81-b08b-34e49822ec81\") " Oct 28 05:13:02.891114 kubelet[2793]: I1028 05:13:02.890642 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f96a244b-47a4-4d81-b08b-34e49822ec81-whisker-ca-bundle\") pod \"f96a244b-47a4-4d81-b08b-34e49822ec81\" (UID: \"f96a244b-47a4-4d81-b08b-34e49822ec81\") " Oct 28 05:13:02.891217 kubelet[2793]: I1028 05:13:02.891179 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f96a244b-47a4-4d81-b08b-34e49822ec81-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f96a244b-47a4-4d81-b08b-34e49822ec81" (UID: "f96a244b-47a4-4d81-b08b-34e49822ec81"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 28 05:13:02.895857 kubelet[2793]: I1028 05:13:02.895809 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f96a244b-47a4-4d81-b08b-34e49822ec81-kube-api-access-sjfcl" (OuterVolumeSpecName: "kube-api-access-sjfcl") pod "f96a244b-47a4-4d81-b08b-34e49822ec81" (UID: "f96a244b-47a4-4d81-b08b-34e49822ec81"). InnerVolumeSpecName "kube-api-access-sjfcl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 05:13:02.896495 kubelet[2793]: I1028 05:13:02.896468 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f96a244b-47a4-4d81-b08b-34e49822ec81-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f96a244b-47a4-4d81-b08b-34e49822ec81" (UID: "f96a244b-47a4-4d81-b08b-34e49822ec81"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 28 05:13:02.991887 kubelet[2793]: I1028 05:13:02.991662 2793 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f96a244b-47a4-4d81-b08b-34e49822ec81-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 28 05:13:02.991887 kubelet[2793]: I1028 05:13:02.991816 2793 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjfcl\" (UniqueName: \"kubernetes.io/projected/f96a244b-47a4-4d81-b08b-34e49822ec81-kube-api-access-sjfcl\") on node \"localhost\" DevicePath \"\"" Oct 28 05:13:02.991887 kubelet[2793]: I1028 05:13:02.991829 2793 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f96a244b-47a4-4d81-b08b-34e49822ec81-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 28 05:13:03.422167 systemd[1]: var-lib-kubelet-pods-f96a244b\x2d47a4\x2d4d81\x2db08b\x2d34e49822ec81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjfcl.mount: Deactivated successfully. Oct 28 05:13:03.422308 systemd[1]: var-lib-kubelet-pods-f96a244b\x2d47a4\x2d4d81\x2db08b\x2d34e49822ec81-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 28 05:13:03.463453 containerd[1611]: time="2025-10-28T05:13:03.463384700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9fjz,Uid:49813e29-063b-4ee4-bacc-cb4ec7eba7e0,Namespace:calico-system,Attempt:0,}" Oct 28 05:13:03.464066 kubelet[2793]: E1028 05:13:03.463811 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:03.464454 containerd[1611]: time="2025-10-28T05:13:03.464400579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6kw6d,Uid:1c414253-aa48-4b05-96cc-7f6614bb7635,Namespace:kube-system,Attempt:0,}" Oct 28 05:13:03.605330 kubelet[2793]: E1028 05:13:03.605032 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:03.615741 systemd[1]: Removed slice kubepods-besteffort-podf96a244b_47a4_4d81_b08b_34e49822ec81.slice - libcontainer container kubepods-besteffort-podf96a244b_47a4_4d81_b08b_34e49822ec81.slice. 
Oct 28 05:13:03.629349 kubelet[2793]: I1028 05:13:03.628686 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c4zlz" podStartSLOduration=1.890662608 podStartE2EDuration="19.628667642s" podCreationTimestamp="2025-10-28 05:12:44 +0000 UTC" firstStartedPulling="2025-10-28 05:12:44.678544818 +0000 UTC m=+18.348070889" lastFinishedPulling="2025-10-28 05:13:02.416549852 +0000 UTC m=+36.086075923" observedRunningTime="2025-10-28 05:13:03.624172837 +0000 UTC m=+37.293698908" watchObservedRunningTime="2025-10-28 05:13:03.628667642 +0000 UTC m=+37.298193713" Oct 28 05:13:03.666636 systemd-networkd[1506]: cali81611f092ce: Link UP Oct 28 05:13:03.666896 systemd-networkd[1506]: cali81611f092ce: Gained carrier Oct 28 05:13:03.695664 containerd[1611]: 2025-10-28 05:13:03.497 [INFO][3947] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:13:03.695664 containerd[1611]: 2025-10-28 05:13:03.517 [INFO][3947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0 coredns-674b8bbfcf- kube-system 1c414253-aa48-4b05-96cc-7f6614bb7635 871 0 2025-10-28 05:12:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6kw6d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81611f092ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Namespace="kube-system" Pod="coredns-674b8bbfcf-6kw6d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6kw6d-" Oct 28 05:13:03.695664 containerd[1611]: 2025-10-28 05:13:03.517 [INFO][3947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Namespace="kube-system" Pod="coredns-674b8bbfcf-6kw6d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" Oct 28 05:13:03.695664 containerd[1611]: 2025-10-28 05:13:03.590 [INFO][3968] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" HandleID="k8s-pod-network.ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Workload="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.591 [INFO][3968] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" HandleID="k8s-pod-network.ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Workload="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000532830), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6kw6d", "timestamp":"2025-10-28 05:13:03.590152279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.591 [INFO][3968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.591 [INFO][3968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.591 [INFO][3968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.599 [INFO][3968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" host="localhost" Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.609 [INFO][3968] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.617 [INFO][3968] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.619 [INFO][3968] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.624 [INFO][3968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:03.695992 containerd[1611]: 2025-10-28 05:13:03.624 [INFO][3968] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" host="localhost" Oct 28 05:13:03.696276 containerd[1611]: 2025-10-28 05:13:03.627 [INFO][3968] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080 Oct 28 05:13:03.696276 containerd[1611]: 2025-10-28 05:13:03.634 [INFO][3968] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" host="localhost" Oct 28 05:13:03.696276 containerd[1611]: 2025-10-28 05:13:03.641 [INFO][3968] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" host="localhost" Oct 28 05:13:03.696276 containerd[1611]: 2025-10-28 05:13:03.642 [INFO][3968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" host="localhost" Oct 28 05:13:03.696276 containerd[1611]: 2025-10-28 05:13:03.642 [INFO][3968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:13:03.696276 containerd[1611]: 2025-10-28 05:13:03.642 [INFO][3968] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" HandleID="k8s-pod-network.ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Workload="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" Oct 28 05:13:03.696469 containerd[1611]: 2025-10-28 05:13:03.653 [INFO][3947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Namespace="kube-system" Pod="coredns-674b8bbfcf-6kw6d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1c414253-aa48-4b05-96cc-7f6614bb7635", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6kw6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81611f092ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:03.696559 containerd[1611]: 2025-10-28 05:13:03.655 [INFO][3947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Namespace="kube-system" Pod="coredns-674b8bbfcf-6kw6d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" Oct 28 05:13:03.696559 containerd[1611]: 2025-10-28 05:13:03.655 [INFO][3947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81611f092ce ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Namespace="kube-system" Pod="coredns-674b8bbfcf-6kw6d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" Oct 28 05:13:03.696559 containerd[1611]: 2025-10-28 05:13:03.666 [INFO][3947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Namespace="kube-system" Pod="coredns-674b8bbfcf-6kw6d" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" Oct 28 05:13:03.696658 containerd[1611]: 2025-10-28 05:13:03.667 [INFO][3947] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Namespace="kube-system" Pod="coredns-674b8bbfcf-6kw6d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1c414253-aa48-4b05-96cc-7f6614bb7635", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080", Pod:"coredns-674b8bbfcf-6kw6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81611f092ce", MAC:"b2:9f:a4:ed:0e:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:03.696658 containerd[1611]: 2025-10-28 05:13:03.686 [INFO][3947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" Namespace="kube-system" Pod="coredns-674b8bbfcf-6kw6d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6kw6d-eth0" Oct 28 05:13:03.894766 systemd-networkd[1506]: cali8b03af24a63: Link UP Oct 28 05:13:03.895896 systemd-networkd[1506]: cali8b03af24a63: Gained carrier Oct 28 05:13:03.904041 systemd[1]: Created slice kubepods-besteffort-pod8192e129_0d18_4558_9c03_84afd7a7f848.slice - libcontainer container kubepods-besteffort-pod8192e129_0d18_4558_9c03_84afd7a7f848.slice. Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.494 [INFO][3941] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.517 [INFO][3941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--k9fjz-eth0 goldmane-666569f655- calico-system 49813e29-063b-4ee4-bacc-cb4ec7eba7e0 875 0 2025-10-28 05:12:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-k9fjz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8b03af24a63 [] [] }} ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Namespace="calico-system" Pod="goldmane-666569f655-k9fjz" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9fjz-" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.517 [INFO][3941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Namespace="calico-system" Pod="goldmane-666569f655-k9fjz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9fjz-eth0" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.590 [INFO][3970] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" HandleID="k8s-pod-network.83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Workload="localhost-k8s-goldmane--666569f655--k9fjz-eth0" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.591 [INFO][3970] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" HandleID="k8s-pod-network.83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Workload="localhost-k8s-goldmane--666569f655--k9fjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001276b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-k9fjz", "timestamp":"2025-10-28 05:13:03.590543756 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.591 [INFO][3970] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.642 [INFO][3970] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.643 [INFO][3970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.700 [INFO][3970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" host="localhost" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.714 [INFO][3970] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.760 [INFO][3970] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.762 [INFO][3970] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.765 [INFO][3970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.765 [INFO][3970] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" host="localhost" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.766 [INFO][3970] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.791 [INFO][3970] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" host="localhost" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.887 [INFO][3970] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" host="localhost" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.887 [INFO][3970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" host="localhost" Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.887 [INFO][3970] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:13:03.940860 containerd[1611]: 2025-10-28 05:13:03.887 [INFO][3970] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" HandleID="k8s-pod-network.83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Workload="localhost-k8s-goldmane--666569f655--k9fjz-eth0" Oct 28 05:13:03.941547 containerd[1611]: 2025-10-28 05:13:03.890 [INFO][3941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Namespace="calico-system" Pod="goldmane-666569f655-k9fjz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9fjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--k9fjz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"49813e29-063b-4ee4-bacc-cb4ec7eba7e0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-k9fjz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b03af24a63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:03.941547 containerd[1611]: 2025-10-28 05:13:03.890 [INFO][3941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Namespace="calico-system" Pod="goldmane-666569f655-k9fjz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9fjz-eth0" Oct 28 05:13:03.941547 containerd[1611]: 2025-10-28 05:13:03.890 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b03af24a63 ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Namespace="calico-system" Pod="goldmane-666569f655-k9fjz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9fjz-eth0" Oct 28 05:13:03.941547 containerd[1611]: 2025-10-28 05:13:03.896 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Namespace="calico-system" Pod="goldmane-666569f655-k9fjz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9fjz-eth0" Oct 28 05:13:03.941547 containerd[1611]: 2025-10-28 05:13:03.898 [INFO][3941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Namespace="calico-system" Pod="goldmane-666569f655-k9fjz" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9fjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--k9fjz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"49813e29-063b-4ee4-bacc-cb4ec7eba7e0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d", Pod:"goldmane-666569f655-k9fjz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b03af24a63", MAC:"9a:7a:0e:8a:c0:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:03.941547 containerd[1611]: 2025-10-28 05:13:03.937 [INFO][3941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" Namespace="calico-system" Pod="goldmane-666569f655-k9fjz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--k9fjz-eth0" Oct 28 05:13:04.000162 kubelet[2793]: I1028 05:13:03.999983 2793 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8192e129-0d18-4558-9c03-84afd7a7f848-whisker-ca-bundle\") pod \"whisker-664d46d96d-xvk5c\" (UID: \"8192e129-0d18-4558-9c03-84afd7a7f848\") " pod="calico-system/whisker-664d46d96d-xvk5c" Oct 28 05:13:04.000162 kubelet[2793]: I1028 05:13:04.000086 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkk5s\" (UniqueName: \"kubernetes.io/projected/8192e129-0d18-4558-9c03-84afd7a7f848-kube-api-access-lkk5s\") pod \"whisker-664d46d96d-xvk5c\" (UID: \"8192e129-0d18-4558-9c03-84afd7a7f848\") " pod="calico-system/whisker-664d46d96d-xvk5c" Oct 28 05:13:04.000162 kubelet[2793]: I1028 05:13:04.000140 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8192e129-0d18-4558-9c03-84afd7a7f848-whisker-backend-key-pair\") pod \"whisker-664d46d96d-xvk5c\" (UID: \"8192e129-0d18-4558-9c03-84afd7a7f848\") " pod="calico-system/whisker-664d46d96d-xvk5c" Oct 28 05:13:04.208079 containerd[1611]: time="2025-10-28T05:13:04.207764560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-664d46d96d-xvk5c,Uid:8192e129-0d18-4558-9c03-84afd7a7f848,Namespace:calico-system,Attempt:0,}" Oct 28 05:13:04.255034 containerd[1611]: time="2025-10-28T05:13:04.254836298Z" level=info msg="connecting to shim ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080" address="unix:///run/containerd/s/da0d4eefc91bfd09b326d5dd9d2087d9330ecef79382c8961238e73a895a6a88" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:13:04.257957 containerd[1611]: time="2025-10-28T05:13:04.257759168Z" level=info msg="connecting to shim 83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d" address="unix:///run/containerd/s/f0a952f7c8503c511ba2e2c81130584222ebf65c7795f6f056301585bd35db9b" 
namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:13:04.285944 systemd[1]: Started cri-containerd-ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080.scope - libcontainer container ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080. Oct 28 05:13:04.289717 systemd[1]: Started cri-containerd-83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d.scope - libcontainer container 83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d. Oct 28 05:13:04.307970 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 05:13:04.310942 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 05:13:04.342150 systemd-networkd[1506]: calia245fc810cc: Link UP Oct 28 05:13:04.343412 systemd-networkd[1506]: calia245fc810cc: Gained carrier Oct 28 05:13:04.359594 containerd[1611]: time="2025-10-28T05:13:04.359412602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6kw6d,Uid:1c414253-aa48-4b05-96cc-7f6614bb7635,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080\"" Oct 28 05:13:04.360979 kubelet[2793]: E1028 05:13:04.360944 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.239 [INFO][4102] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.255 [INFO][4102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--664d46d96d--xvk5c-eth0 whisker-664d46d96d- calico-system 8192e129-0d18-4558-9c03-84afd7a7f848 960 0 2025-10-28 05:13:03 +0000 UTC map[app.kubernetes.io/name:whisker 
k8s-app:whisker pod-template-hash:664d46d96d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-664d46d96d-xvk5c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia245fc810cc [] [] }} ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Namespace="calico-system" Pod="whisker-664d46d96d-xvk5c" WorkloadEndpoint="localhost-k8s-whisker--664d46d96d--xvk5c-" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.256 [INFO][4102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Namespace="calico-system" Pod="whisker-664d46d96d-xvk5c" WorkloadEndpoint="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.291 [INFO][4154] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" HandleID="k8s-pod-network.2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Workload="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.292 [INFO][4154] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" HandleID="k8s-pod-network.2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Workload="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033b6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-664d46d96d-xvk5c", "timestamp":"2025-10-28 05:13:04.2918765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.292 [INFO][4154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.292 [INFO][4154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.293 [INFO][4154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.301 [INFO][4154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" host="localhost" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.306 [INFO][4154] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.312 [INFO][4154] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.314 [INFO][4154] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.317 [INFO][4154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.317 [INFO][4154] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" host="localhost" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.319 [INFO][4154] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5 Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.324 [INFO][4154] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" host="localhost" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.330 [INFO][4154] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" host="localhost" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.330 [INFO][4154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" host="localhost" Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.330 [INFO][4154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:13:04.366990 containerd[1611]: 2025-10-28 05:13:04.330 [INFO][4154] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" HandleID="k8s-pod-network.2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Workload="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" Oct 28 05:13:04.368838 containerd[1611]: 2025-10-28 05:13:04.335 [INFO][4102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Namespace="calico-system" Pod="whisker-664d46d96d-xvk5c" WorkloadEndpoint="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--664d46d96d--xvk5c-eth0", GenerateName:"whisker-664d46d96d-", Namespace:"calico-system", SelfLink:"", UID:"8192e129-0d18-4558-9c03-84afd7a7f848", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 13, 3, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"664d46d96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-664d46d96d-xvk5c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia245fc810cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:04.368838 containerd[1611]: 2025-10-28 05:13:04.336 [INFO][4102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Namespace="calico-system" Pod="whisker-664d46d96d-xvk5c" WorkloadEndpoint="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" Oct 28 05:13:04.368838 containerd[1611]: 2025-10-28 05:13:04.336 [INFO][4102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia245fc810cc ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Namespace="calico-system" Pod="whisker-664d46d96d-xvk5c" WorkloadEndpoint="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" Oct 28 05:13:04.368838 containerd[1611]: 2025-10-28 05:13:04.344 [INFO][4102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Namespace="calico-system" Pod="whisker-664d46d96d-xvk5c" WorkloadEndpoint="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" Oct 28 
05:13:04.368838 containerd[1611]: 2025-10-28 05:13:04.344 [INFO][4102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Namespace="calico-system" Pod="whisker-664d46d96d-xvk5c" WorkloadEndpoint="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--664d46d96d--xvk5c-eth0", GenerateName:"whisker-664d46d96d-", Namespace:"calico-system", SelfLink:"", UID:"8192e129-0d18-4558-9c03-84afd7a7f848", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 13, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"664d46d96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5", Pod:"whisker-664d46d96d-xvk5c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia245fc810cc", MAC:"26:f6:aa:52:14:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:04.368838 containerd[1611]: 2025-10-28 05:13:04.358 [INFO][4102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" Namespace="calico-system" Pod="whisker-664d46d96d-xvk5c" WorkloadEndpoint="localhost-k8s-whisker--664d46d96d--xvk5c-eth0" Oct 28 05:13:04.370448 containerd[1611]: time="2025-10-28T05:13:04.370392869Z" level=info msg="CreateContainer within sandbox \"ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 05:13:04.372325 containerd[1611]: time="2025-10-28T05:13:04.372274574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k9fjz,Uid:49813e29-063b-4ee4-bacc-cb4ec7eba7e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"83cbe3ef185bd510ac3d80907715994f43406f9c72e515dbb4ae06a9bf24057d\"" Oct 28 05:13:04.374022 containerd[1611]: time="2025-10-28T05:13:04.373946435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 05:13:04.395834 containerd[1611]: time="2025-10-28T05:13:04.395768539Z" level=info msg="Container b497eeafd79d6bebb01ecba28f27104b9477553e491a0354b32bdb629014d099: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:13:04.396847 containerd[1611]: time="2025-10-28T05:13:04.396770742Z" level=info msg="connecting to shim 2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5" address="unix:///run/containerd/s/98de4b6d5c1bf2c57576d8a8a74808071d838cdd7a6929ae880685f163348769" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:13:04.403268 containerd[1611]: time="2025-10-28T05:13:04.403217272Z" level=info msg="CreateContainer within sandbox \"ffcc0ac167828fe05871d2eef32e473110abae0d520a9deee64bf710897da080\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b497eeafd79d6bebb01ecba28f27104b9477553e491a0354b32bdb629014d099\"" Oct 28 05:13:04.404149 containerd[1611]: time="2025-10-28T05:13:04.404113676Z" level=info msg="StartContainer for \"b497eeafd79d6bebb01ecba28f27104b9477553e491a0354b32bdb629014d099\"" Oct 28 
05:13:04.405500 containerd[1611]: time="2025-10-28T05:13:04.405441851Z" level=info msg="connecting to shim b497eeafd79d6bebb01ecba28f27104b9477553e491a0354b32bdb629014d099" address="unix:///run/containerd/s/da0d4eefc91bfd09b326d5dd9d2087d9330ecef79382c8961238e73a895a6a88" protocol=ttrpc version=3 Oct 28 05:13:04.424970 systemd[1]: Started cri-containerd-2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5.scope - libcontainer container 2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5. Oct 28 05:13:04.437665 systemd[1]: Started cri-containerd-b497eeafd79d6bebb01ecba28f27104b9477553e491a0354b32bdb629014d099.scope - libcontainer container b497eeafd79d6bebb01ecba28f27104b9477553e491a0354b32bdb629014d099. Oct 28 05:13:04.452996 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 05:13:04.464240 containerd[1611]: time="2025-10-28T05:13:04.464082002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84854587bd-9b74s,Uid:e27c6b1a-18a7-411c-9050-0f3b48a38781,Namespace:calico-apiserver,Attempt:0,}" Oct 28 05:13:04.466214 kubelet[2793]: I1028 05:13:04.466165 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f96a244b-47a4-4d81-b08b-34e49822ec81" path="/var/lib/kubelet/pods/f96a244b-47a4-4d81-b08b-34e49822ec81/volumes" Oct 28 05:13:04.503084 containerd[1611]: time="2025-10-28T05:13:04.503017274Z" level=info msg="StartContainer for \"b497eeafd79d6bebb01ecba28f27104b9477553e491a0354b32bdb629014d099\" returns successfully" Oct 28 05:13:04.508097 containerd[1611]: time="2025-10-28T05:13:04.508004563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-664d46d96d-xvk5c,Uid:8192e129-0d18-4558-9c03-84afd7a7f848,Namespace:calico-system,Attempt:0,} returns sandbox id \"2533ece39407b08dc273cba5800f48fbb3d93ccbf29ea612315d509d4ea0a3e5\"" Oct 28 05:13:04.591608 systemd-networkd[1506]: cali9ec8d47fc6e: Link UP Oct 28 
05:13:04.592019 systemd-networkd[1506]: cali9ec8d47fc6e: Gained carrier Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.508 [INFO][4280] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.522 [INFO][4280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0 calico-apiserver-84854587bd- calico-apiserver e27c6b1a-18a7-411c-9050-0f3b48a38781 873 0 2025-10-28 05:12:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84854587bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84854587bd-9b74s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ec8d47fc6e [] [] }} ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-9b74s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--9b74s-" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.522 [INFO][4280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-9b74s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.552 [INFO][4308] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" HandleID="k8s-pod-network.5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Workload="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" Oct 28 05:13:04.605512 containerd[1611]: 
2025-10-28 05:13:04.552 [INFO][4308] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" HandleID="k8s-pod-network.5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Workload="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84854587bd-9b74s", "timestamp":"2025-10-28 05:13:04.552412909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.552 [INFO][4308] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.552 [INFO][4308] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.552 [INFO][4308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.563 [INFO][4308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" host="localhost" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.566 [INFO][4308] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.571 [INFO][4308] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.572 [INFO][4308] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.574 [INFO][4308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.574 [INFO][4308] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" host="localhost" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.576 [INFO][4308] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.579 [INFO][4308] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" host="localhost" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.585 [INFO][4308] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" host="localhost" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.585 [INFO][4308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" host="localhost" Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.585 [INFO][4308] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:13:04.605512 containerd[1611]: 2025-10-28 05:13:04.585 [INFO][4308] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" HandleID="k8s-pod-network.5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Workload="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" Oct 28 05:13:04.606607 containerd[1611]: 2025-10-28 05:13:04.589 [INFO][4280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-9b74s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0", GenerateName:"calico-apiserver-84854587bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e27c6b1a-18a7-411c-9050-0f3b48a38781", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84854587bd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84854587bd-9b74s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ec8d47fc6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:04.606607 containerd[1611]: 2025-10-28 05:13:04.589 [INFO][4280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-9b74s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" Oct 28 05:13:04.606607 containerd[1611]: 2025-10-28 05:13:04.589 [INFO][4280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ec8d47fc6e ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-9b74s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" Oct 28 05:13:04.606607 containerd[1611]: 2025-10-28 05:13:04.592 [INFO][4280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-9b74s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" Oct 28 05:13:04.606607 containerd[1611]: 2025-10-28 05:13:04.593 [INFO][4280] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-9b74s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0", GenerateName:"calico-apiserver-84854587bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e27c6b1a-18a7-411c-9050-0f3b48a38781", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84854587bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b", Pod:"calico-apiserver-84854587bd-9b74s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ec8d47fc6e", MAC:"ce:10:91:7a:7e:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:04.606607 containerd[1611]: 2025-10-28 05:13:04.601 [INFO][4280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-9b74s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--9b74s-eth0" Oct 28 05:13:04.612865 kubelet[2793]: E1028 05:13:04.612767 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:04.616379 kubelet[2793]: I1028 05:13:04.616347 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 05:13:04.616945 kubelet[2793]: E1028 05:13:04.616857 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:04.661951 containerd[1611]: time="2025-10-28T05:13:04.660293119Z" level=info msg="connecting to shim 5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b" address="unix:///run/containerd/s/daee2c501e7a9a228badad0cfb85e98ac3b66714aa1c71ba608be6c636d34903" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:13:04.697165 containerd[1611]: time="2025-10-28T05:13:04.697108580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:04.702509 containerd[1611]: time="2025-10-28T05:13:04.702460443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 05:13:04.707437 containerd[1611]: time="2025-10-28T05:13:04.707363725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 05:13:04.708107 kubelet[2793]: E1028 05:13:04.707820 2793 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:13:04.708107 kubelet[2793]: E1028 05:13:04.707872 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:13:04.709174 containerd[1611]: time="2025-10-28T05:13:04.708978739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 05:13:04.714043 systemd[1]: Started cri-containerd-5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b.scope - libcontainer container 5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b. 
Oct 28 05:13:04.715276 kubelet[2793]: E1028 05:13:04.714882 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zs97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9fjz_calico-system(49813e29-063b-4ee4-bacc-cb4ec7eba7e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:04.716106 kubelet[2793]: E1028 05:13:04.716062 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9fjz" podUID="49813e29-063b-4ee4-bacc-cb4ec7eba7e0" Oct 28 05:13:04.739090 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 05:13:04.813663 containerd[1611]: time="2025-10-28T05:13:04.813527572Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-84854587bd-9b74s,Uid:e27c6b1a-18a7-411c-9050-0f3b48a38781,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5f8c4b46601b6cb8639944e52fa52b85c557886a9552bab0ff5eebe1d969658b\"" Oct 28 05:13:04.981446 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:35708.service - OpenSSH per-connection server daemon (10.0.0.1:35708). Oct 28 05:13:05.050288 containerd[1611]: time="2025-10-28T05:13:05.050221700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:05.058042 containerd[1611]: time="2025-10-28T05:13:05.057970015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 05:13:05.058042 containerd[1611]: time="2025-10-28T05:13:05.058010511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 05:13:05.058314 kubelet[2793]: E1028 05:13:05.058263 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:13:05.059201 kubelet[2793]: E1028 05:13:05.058329 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:13:05.059201 kubelet[2793]: E1028 05:13:05.058620 2793 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2926ad180e634552b42f6badbaccdc32,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkk5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-664d46d96d-xvk5c_calico-system(8192e129-0d18-4558-9c03-84afd7a7f848): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:05.059325 containerd[1611]: time="2025-10-28T05:13:05.058700677Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:13:05.065194 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 35708 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:05.067303 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:05.072465 systemd-logind[1584]: New session 10 of user core. Oct 28 05:13:05.083077 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 28 05:13:05.230420 sshd[4383]: Connection closed by 10.0.0.1 port 35708 Oct 28 05:13:05.232957 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:05.245122 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:35708.service: Deactivated successfully. Oct 28 05:13:05.249142 systemd-networkd[1506]: cali81611f092ce: Gained IPv6LL Oct 28 05:13:05.256439 systemd[1]: session-10.scope: Deactivated successfully. Oct 28 05:13:05.258833 systemd-logind[1584]: Session 10 logged out. Waiting for processes to exit. Oct 28 05:13:05.262782 systemd-logind[1584]: Removed session 10. 
Oct 28 05:13:05.469511 containerd[1611]: time="2025-10-28T05:13:05.469417807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:05.474548 containerd[1611]: time="2025-10-28T05:13:05.474483362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:13:05.474652 containerd[1611]: time="2025-10-28T05:13:05.474571438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:13:05.475062 kubelet[2793]: E1028 05:13:05.474964 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:05.475062 kubelet[2793]: E1028 05:13:05.475052 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:05.475847 kubelet[2793]: E1028 05:13:05.475514 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8hs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84854587bd-9b74s_calico-apiserver(e27c6b1a-18a7-411c-9050-0f3b48a38781): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:05.476036 containerd[1611]: time="2025-10-28T05:13:05.475631569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 05:13:05.477746 kubelet[2793]: E1028 05:13:05.477645 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" podUID="e27c6b1a-18a7-411c-9050-0f3b48a38781" Oct 28 05:13:05.504027 systemd-networkd[1506]: cali8b03af24a63: Gained IPv6LL Oct 28 05:13:05.618997 kubelet[2793]: E1028 05:13:05.618592 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:05.618997 kubelet[2793]: E1028 05:13:05.618874 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9fjz" podUID="49813e29-063b-4ee4-bacc-cb4ec7eba7e0" Oct 28 05:13:05.620276 kubelet[2793]: E1028 05:13:05.620236 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" podUID="e27c6b1a-18a7-411c-9050-0f3b48a38781" Oct 28 05:13:05.632347 systemd-networkd[1506]: calia245fc810cc: Gained IPv6LL Oct 28 05:13:05.636163 kubelet[2793]: I1028 05:13:05.636082 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6kw6d" podStartSLOduration=34.636048662 podStartE2EDuration="34.636048662s" podCreationTimestamp="2025-10-28 05:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:13:04.635705239 +0000 UTC m=+38.305231310" watchObservedRunningTime="2025-10-28 05:13:05.636048662 +0000 UTC m=+39.305574723" Oct 28 05:13:05.798819 containerd[1611]: time="2025-10-28T05:13:05.798631925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:05.800058 containerd[1611]: time="2025-10-28T05:13:05.799990527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 05:13:05.800058 containerd[1611]: time="2025-10-28T05:13:05.800050680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 05:13:05.800337 kubelet[2793]: E1028 05:13:05.800290 2793 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:13:05.800403 kubelet[2793]: E1028 05:13:05.800349 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:13:05.800531 kubelet[2793]: E1028 05:13:05.800491 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkk5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-664d46d96d-xvk5c_calico-system(8192e129-0d18-4558-9c03-84afd7a7f848): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:05.801731 kubelet[2793]: E1028 05:13:05.801670 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-664d46d96d-xvk5c" podUID="8192e129-0d18-4558-9c03-84afd7a7f848" Oct 28 05:13:05.887986 
systemd-networkd[1506]: cali9ec8d47fc6e: Gained IPv6LL Oct 28 05:13:06.466251 containerd[1611]: time="2025-10-28T05:13:06.466189852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6c464845-gchcn,Uid:17014fb8-50b3-4fb0-83a0-7ba96dfdd77b,Namespace:calico-system,Attempt:0,}" Oct 28 05:13:06.566555 systemd-networkd[1506]: calid8d34144f72: Link UP Oct 28 05:13:06.567690 systemd-networkd[1506]: calid8d34144f72: Gained carrier Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.489 [INFO][4449] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.501 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0 calico-kube-controllers-5c6c464845- calico-system 17014fb8-50b3-4fb0-83a0-7ba96dfdd77b 872 0 2025-10-28 05:12:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c6c464845 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c6c464845-gchcn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid8d34144f72 [] [] }} ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Namespace="calico-system" Pod="calico-kube-controllers-5c6c464845-gchcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.501 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Namespace="calico-system" Pod="calico-kube-controllers-5c6c464845-gchcn" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.529 [INFO][4463] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" HandleID="k8s-pod-network.1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Workload="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.529 [INFO][4463] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" HandleID="k8s-pod-network.1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Workload="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c6c464845-gchcn", "timestamp":"2025-10-28 05:13:06.529095309 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.529 [INFO][4463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.529 [INFO][4463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.529 [INFO][4463] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.536 [INFO][4463] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" host="localhost" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.540 [INFO][4463] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.545 [INFO][4463] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.547 [INFO][4463] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.549 [INFO][4463] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.549 [INFO][4463] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" host="localhost" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.551 [INFO][4463] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56 Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.555 [INFO][4463] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" host="localhost" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.561 [INFO][4463] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" host="localhost" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.561 [INFO][4463] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" host="localhost" Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.561 [INFO][4463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:13:06.581817 containerd[1611]: 2025-10-28 05:13:06.561 [INFO][4463] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" HandleID="k8s-pod-network.1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Workload="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" Oct 28 05:13:06.582623 containerd[1611]: 2025-10-28 05:13:06.564 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Namespace="calico-system" Pod="calico-kube-controllers-5c6c464845-gchcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0", GenerateName:"calico-kube-controllers-5c6c464845-", Namespace:"calico-system", SelfLink:"", UID:"17014fb8-50b3-4fb0-83a0-7ba96dfdd77b", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c6c464845", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c6c464845-gchcn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid8d34144f72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:06.582623 containerd[1611]: 2025-10-28 05:13:06.564 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Namespace="calico-system" Pod="calico-kube-controllers-5c6c464845-gchcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" Oct 28 05:13:06.582623 containerd[1611]: 2025-10-28 05:13:06.564 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8d34144f72 ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Namespace="calico-system" Pod="calico-kube-controllers-5c6c464845-gchcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" Oct 28 05:13:06.582623 containerd[1611]: 2025-10-28 05:13:06.568 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Namespace="calico-system" Pod="calico-kube-controllers-5c6c464845-gchcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" Oct 28 05:13:06.582623 containerd[1611]: 
2025-10-28 05:13:06.568 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Namespace="calico-system" Pod="calico-kube-controllers-5c6c464845-gchcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0", GenerateName:"calico-kube-controllers-5c6c464845-", Namespace:"calico-system", SelfLink:"", UID:"17014fb8-50b3-4fb0-83a0-7ba96dfdd77b", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c6c464845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56", Pod:"calico-kube-controllers-5c6c464845-gchcn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid8d34144f72", MAC:"52:19:c4:54:b9:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:06.582623 containerd[1611]: 
2025-10-28 05:13:06.576 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" Namespace="calico-system" Pod="calico-kube-controllers-5c6c464845-gchcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c464845--gchcn-eth0" Oct 28 05:13:06.622705 kubelet[2793]: E1028 05:13:06.622657 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:06.623268 kubelet[2793]: E1028 05:13:06.623215 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" podUID="e27c6b1a-18a7-411c-9050-0f3b48a38781" Oct 28 05:13:06.623721 kubelet[2793]: E1028 05:13:06.623656 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-664d46d96d-xvk5c" podUID="8192e129-0d18-4558-9c03-84afd7a7f848" Oct 28 05:13:06.634981 containerd[1611]: time="2025-10-28T05:13:06.634901343Z" level=info msg="connecting to shim 1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56" address="unix:///run/containerd/s/5197fcd72cec0012283108de0aa23d355ab432a4c218cff508219a55ee4cbef3" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:13:06.677939 systemd[1]: Started cri-containerd-1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56.scope - libcontainer container 1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56. Oct 28 05:13:06.692968 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 05:13:06.724121 containerd[1611]: time="2025-10-28T05:13:06.723984989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6c464845-gchcn,Uid:17014fb8-50b3-4fb0-83a0-7ba96dfdd77b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1228f685034bb98c1c38ada1df05b77e517f35161cac34b67c8dcd0d17b11c56\"" Oct 28 05:13:06.725879 containerd[1611]: time="2025-10-28T05:13:06.725839913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 05:13:07.109956 containerd[1611]: time="2025-10-28T05:13:07.109783511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:07.111929 containerd[1611]: time="2025-10-28T05:13:07.111854510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 05:13:07.112036 containerd[1611]: time="2025-10-28T05:13:07.111902390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 05:13:07.112278 kubelet[2793]: E1028 05:13:07.112210 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:13:07.112346 kubelet[2793]: E1028 05:13:07.112281 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:13:07.112523 kubelet[2793]: E1028 05:13:07.112457 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvswr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c6c464845-gchcn_calico-system(17014fb8-50b3-4fb0-83a0-7ba96dfdd77b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:07.113727 kubelet[2793]: E1028 05:13:07.113661 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" podUID="17014fb8-50b3-4fb0-83a0-7ba96dfdd77b" Oct 28 05:13:07.463481 kubelet[2793]: E1028 05:13:07.463428 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:07.464160 containerd[1611]: time="2025-10-28T05:13:07.464076596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4mflf,Uid:edaaa83e-dc34-4a96-9305-4f3c94df0a41,Namespace:kube-system,Attempt:0,}" Oct 28 05:13:07.464531 containerd[1611]: time="2025-10-28T05:13:07.464495523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84854587bd-trjrh,Uid:4bd50f09-b032-49f4-8f6f-5043dcd6661f,Namespace:calico-apiserver,Attempt:0,}" Oct 28 05:13:07.464774 containerd[1611]: time="2025-10-28T05:13:07.464738319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsl75,Uid:d86ce3dc-83d4-402a-b381-76ea2d723abb,Namespace:calico-system,Attempt:0,}" Oct 28 05:13:07.627103 kubelet[2793]: E1028 05:13:07.626946 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" podUID="17014fb8-50b3-4fb0-83a0-7ba96dfdd77b" Oct 28 05:13:07.646247 systemd-networkd[1506]: cali102a5b54847: Link UP Oct 28 05:13:07.647602 systemd-networkd[1506]: cali102a5b54847: Gained carrier Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.521 [INFO][4541] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.540 [INFO][4541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4mflf-eth0 coredns-674b8bbfcf- 
kube-system edaaa83e-dc34-4a96-9305-4f3c94df0a41 867 0 2025-10-28 05:12:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4mflf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali102a5b54847 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mflf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4mflf-" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.541 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mflf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.594 [INFO][4588] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" HandleID="k8s-pod-network.5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Workload="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.595 [INFO][4588] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" HandleID="k8s-pod-network.5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Workload="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001166d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4mflf", "timestamp":"2025-10-28 05:13:07.594863746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.595 [INFO][4588] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.595 [INFO][4588] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.595 [INFO][4588] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.607 [INFO][4588] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" host="localhost" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.611 [INFO][4588] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.615 [INFO][4588] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.617 [INFO][4588] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.620 [INFO][4588] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.620 [INFO][4588] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" host="localhost" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.622 [INFO][4588] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3 Oct 28 05:13:07.661634 
containerd[1611]: 2025-10-28 05:13:07.628 [INFO][4588] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" host="localhost" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.637 [INFO][4588] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" host="localhost" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.637 [INFO][4588] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" host="localhost" Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.637 [INFO][4588] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:13:07.661634 containerd[1611]: 2025-10-28 05:13:07.637 [INFO][4588] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" HandleID="k8s-pod-network.5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Workload="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" Oct 28 05:13:07.662474 containerd[1611]: 2025-10-28 05:13:07.643 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mflf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4mflf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"edaaa83e-dc34-4a96-9305-4f3c94df0a41", ResourceVersion:"867", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4mflf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali102a5b54847", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:07.662474 containerd[1611]: 2025-10-28 05:13:07.643 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mflf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" Oct 28 05:13:07.662474 containerd[1611]: 2025-10-28 05:13:07.644 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali102a5b54847 
ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mflf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" Oct 28 05:13:07.662474 containerd[1611]: 2025-10-28 05:13:07.646 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mflf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" Oct 28 05:13:07.662474 containerd[1611]: 2025-10-28 05:13:07.647 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mflf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4mflf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"edaaa83e-dc34-4a96-9305-4f3c94df0a41", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3", Pod:"coredns-674b8bbfcf-4mflf", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali102a5b54847", MAC:"66:a2:39:c3:36:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:07.662474 containerd[1611]: 2025-10-28 05:13:07.658 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-4mflf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4mflf-eth0" Oct 28 05:13:07.690567 containerd[1611]: time="2025-10-28T05:13:07.690513728Z" level=info msg="connecting to shim 5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3" address="unix:///run/containerd/s/1c9e3fc29e5ab79bcddc91d04ebf5ed594dbc8c1535a534e06a7a6d89d826d10" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:13:07.726109 systemd[1]: Started cri-containerd-5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3.scope - libcontainer container 5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3. 
Oct 28 05:13:07.749293 systemd-networkd[1506]: calidee43f6353e: Link UP Oct 28 05:13:07.750316 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 05:13:07.750340 systemd-networkd[1506]: calidee43f6353e: Gained carrier Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.559 [INFO][4567] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.573 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xsl75-eth0 csi-node-driver- calico-system d86ce3dc-83d4-402a-b381-76ea2d723abb 767 0 2025-10-28 05:12:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xsl75 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidee43f6353e [] [] }} ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Namespace="calico-system" Pod="csi-node-driver-xsl75" WorkloadEndpoint="localhost-k8s-csi--node--driver--xsl75-" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.573 [INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Namespace="calico-system" Pod="csi-node-driver-xsl75" WorkloadEndpoint="localhost-k8s-csi--node--driver--xsl75-eth0" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.623 [INFO][4599] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" 
HandleID="k8s-pod-network.2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Workload="localhost-k8s-csi--node--driver--xsl75-eth0" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.624 [INFO][4599] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" HandleID="k8s-pod-network.2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Workload="localhost-k8s-csi--node--driver--xsl75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xsl75", "timestamp":"2025-10-28 05:13:07.623495707 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.624 [INFO][4599] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.637 [INFO][4599] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.637 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.708 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" host="localhost" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.713 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.717 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.719 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.721 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.721 [INFO][4599] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" host="localhost" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.723 [INFO][4599] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.728 [INFO][4599] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" host="localhost" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.736 [INFO][4599] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" host="localhost" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.736 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" host="localhost" Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.736 [INFO][4599] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:13:07.767000 containerd[1611]: 2025-10-28 05:13:07.736 [INFO][4599] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" HandleID="k8s-pod-network.2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Workload="localhost-k8s-csi--node--driver--xsl75-eth0" Oct 28 05:13:07.767755 containerd[1611]: 2025-10-28 05:13:07.744 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Namespace="calico-system" Pod="csi-node-driver-xsl75" WorkloadEndpoint="localhost-k8s-csi--node--driver--xsl75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xsl75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d86ce3dc-83d4-402a-b381-76ea2d723abb", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xsl75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidee43f6353e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:07.767755 containerd[1611]: 2025-10-28 05:13:07.744 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Namespace="calico-system" Pod="csi-node-driver-xsl75" WorkloadEndpoint="localhost-k8s-csi--node--driver--xsl75-eth0" Oct 28 05:13:07.767755 containerd[1611]: 2025-10-28 05:13:07.744 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidee43f6353e ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Namespace="calico-system" Pod="csi-node-driver-xsl75" WorkloadEndpoint="localhost-k8s-csi--node--driver--xsl75-eth0" Oct 28 05:13:07.767755 containerd[1611]: 2025-10-28 05:13:07.751 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Namespace="calico-system" Pod="csi-node-driver-xsl75" WorkloadEndpoint="localhost-k8s-csi--node--driver--xsl75-eth0" Oct 28 05:13:07.767755 containerd[1611]: 2025-10-28 05:13:07.751 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" 
Namespace="calico-system" Pod="csi-node-driver-xsl75" WorkloadEndpoint="localhost-k8s-csi--node--driver--xsl75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xsl75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d86ce3dc-83d4-402a-b381-76ea2d723abb", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b", Pod:"csi-node-driver-xsl75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidee43f6353e", MAC:"be:c8:b0:c2:3a:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:07.767755 containerd[1611]: 2025-10-28 05:13:07.763 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" Namespace="calico-system" Pod="csi-node-driver-xsl75" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--xsl75-eth0" Oct 28 05:13:07.785998 containerd[1611]: time="2025-10-28T05:13:07.785944468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4mflf,Uid:edaaa83e-dc34-4a96-9305-4f3c94df0a41,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3\"" Oct 28 05:13:07.786874 kubelet[2793]: E1028 05:13:07.786839 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:07.795278 containerd[1611]: time="2025-10-28T05:13:07.795229555Z" level=info msg="CreateContainer within sandbox \"5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 05:13:07.800284 containerd[1611]: time="2025-10-28T05:13:07.800217803Z" level=info msg="connecting to shim 2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b" address="unix:///run/containerd/s/ab459c1d2812f83e5ce862ce2c1c4bbc49a66ab5d1c76ae1a1ee95eb18c4a55e" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:13:07.806221 containerd[1611]: time="2025-10-28T05:13:07.806155574Z" level=info msg="Container 34e9961b575b47b3f21f6cd3f1c09a9a1589fda68f3fbf1419f06c00fba526de: CDI devices from CRI Config.CDIDevices: []" Oct 28 05:13:07.814071 containerd[1611]: time="2025-10-28T05:13:07.814013972Z" level=info msg="CreateContainer within sandbox \"5b839e514e26d6bce268d5e530d0cb429c37eb9809ed60910c43fb15b30e94b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34e9961b575b47b3f21f6cd3f1c09a9a1589fda68f3fbf1419f06c00fba526de\"" Oct 28 05:13:07.815191 containerd[1611]: time="2025-10-28T05:13:07.815143734Z" level=info msg="StartContainer for \"34e9961b575b47b3f21f6cd3f1c09a9a1589fda68f3fbf1419f06c00fba526de\"" Oct 28 05:13:07.817817 containerd[1611]: 
time="2025-10-28T05:13:07.816892759Z" level=info msg="connecting to shim 34e9961b575b47b3f21f6cd3f1c09a9a1589fda68f3fbf1419f06c00fba526de" address="unix:///run/containerd/s/1c9e3fc29e5ab79bcddc91d04ebf5ed594dbc8c1535a534e06a7a6d89d826d10" protocol=ttrpc version=3 Oct 28 05:13:07.843101 systemd[1]: Started cri-containerd-2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b.scope - libcontainer container 2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b. Oct 28 05:13:07.849863 systemd[1]: Started cri-containerd-34e9961b575b47b3f21f6cd3f1c09a9a1589fda68f3fbf1419f06c00fba526de.scope - libcontainer container 34e9961b575b47b3f21f6cd3f1c09a9a1589fda68f3fbf1419f06c00fba526de. Oct 28 05:13:07.853063 systemd-networkd[1506]: cali92051ec4d9e: Link UP Oct 28 05:13:07.854116 systemd-networkd[1506]: cali92051ec4d9e: Gained carrier Oct 28 05:13:07.873954 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.551 [INFO][4545] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.589 [INFO][4545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0 calico-apiserver-84854587bd- calico-apiserver 4bd50f09-b032-49f4-8f6f-5043dcd6661f 876 0 2025-10-28 05:12:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84854587bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84854587bd-trjrh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali92051ec4d9e [] [] }} 
ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-trjrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--trjrh-" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.589 [INFO][4545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-trjrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.638 [INFO][4609] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" HandleID="k8s-pod-network.20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Workload="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.640 [INFO][4609] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" HandleID="k8s-pod-network.20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Workload="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000680590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84854587bd-trjrh", "timestamp":"2025-10-28 05:13:07.638702335 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.640 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.736 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.737 [INFO][4609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.808 [INFO][4609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" host="localhost" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.816 [INFO][4609] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.824 [INFO][4609] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.826 [INFO][4609] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.829 [INFO][4609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.830 [INFO][4609] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" host="localhost" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.832 [INFO][4609] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5 Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.837 [INFO][4609] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" host="localhost" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.846 [INFO][4609] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" host="localhost" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.846 [INFO][4609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" host="localhost" Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.846 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 05:13:07.877660 containerd[1611]: 2025-10-28 05:13:07.846 [INFO][4609] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" HandleID="k8s-pod-network.20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Workload="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" Oct 28 05:13:07.878735 containerd[1611]: 2025-10-28 05:13:07.850 [INFO][4545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-trjrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0", GenerateName:"calico-apiserver-84854587bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bd50f09-b032-49f4-8f6f-5043dcd6661f", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"84854587bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84854587bd-trjrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92051ec4d9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:07.878735 containerd[1611]: 2025-10-28 05:13:07.850 [INFO][4545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-trjrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" Oct 28 05:13:07.878735 containerd[1611]: 2025-10-28 05:13:07.850 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92051ec4d9e ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-trjrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" Oct 28 05:13:07.878735 containerd[1611]: 2025-10-28 05:13:07.856 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-trjrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" Oct 28 05:13:07.878735 
containerd[1611]: 2025-10-28 05:13:07.860 [INFO][4545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-trjrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0", GenerateName:"calico-apiserver-84854587bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bd50f09-b032-49f4-8f6f-5043dcd6661f", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 5, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84854587bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5", Pod:"calico-apiserver-84854587bd-trjrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92051ec4d9e", MAC:"36:37:cb:fa:38:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 05:13:07.878735 containerd[1611]: 2025-10-28 
05:13:07.870 [INFO][4545] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" Namespace="calico-apiserver" Pod="calico-apiserver-84854587bd-trjrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84854587bd--trjrh-eth0" Oct 28 05:13:07.916139 containerd[1611]: time="2025-10-28T05:13:07.916099830Z" level=info msg="StartContainer for \"34e9961b575b47b3f21f6cd3f1c09a9a1589fda68f3fbf1419f06c00fba526de\" returns successfully" Oct 28 05:13:07.930597 containerd[1611]: time="2025-10-28T05:13:07.930447175Z" level=info msg="connecting to shim 20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5" address="unix:///run/containerd/s/63ecdb55a1b9659dd7c049ba7b9efce19442bba695f190d45b80be20624878bb" namespace=k8s.io protocol=ttrpc version=3 Oct 28 05:13:07.960521 containerd[1611]: time="2025-10-28T05:13:07.960469567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsl75,Uid:d86ce3dc-83d4-402a-b381-76ea2d723abb,Namespace:calico-system,Attempt:0,} returns sandbox id \"2fd353a1f4de783afa2e35ef7eb70b44818c1c75bf8519f4b813ce37e8331c2b\"" Oct 28 05:13:07.963370 containerd[1611]: time="2025-10-28T05:13:07.963339818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 05:13:07.971067 kubelet[2793]: I1028 05:13:07.971009 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 05:13:07.971532 kubelet[2793]: E1028 05:13:07.971485 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:07.974184 systemd[1]: Started cri-containerd-20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5.scope - libcontainer container 20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5. 
Oct 28 05:13:07.998667 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 05:13:08.039149 containerd[1611]: time="2025-10-28T05:13:08.039068430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84854587bd-trjrh,Uid:4bd50f09-b032-49f4-8f6f-5043dcd6661f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"20fa99222fb6637e76d49183f212e372bd890112183f5f4731461b2ad26305a5\"" Oct 28 05:13:08.327177 containerd[1611]: time="2025-10-28T05:13:08.326974994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:08.329164 containerd[1611]: time="2025-10-28T05:13:08.329007971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 05:13:08.329164 containerd[1611]: time="2025-10-28T05:13:08.329086409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 05:13:08.329416 kubelet[2793]: E1028 05:13:08.329362 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:13:08.329478 kubelet[2793]: E1028 05:13:08.329437 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:13:08.330241 
kubelet[2793]: E1028 05:13:08.329753 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4d7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-xsl75_calico-system(d86ce3dc-83d4-402a-b381-76ea2d723abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:08.330354 containerd[1611]: time="2025-10-28T05:13:08.330240476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:13:08.512040 systemd-networkd[1506]: calid8d34144f72: Gained IPv6LL Oct 28 05:13:08.635450 kubelet[2793]: E1028 05:13:08.635311 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:08.642340 kubelet[2793]: E1028 05:13:08.642284 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" podUID="17014fb8-50b3-4fb0-83a0-7ba96dfdd77b" Oct 28 05:13:08.643228 kubelet[2793]: E1028 05:13:08.643175 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:08.658177 containerd[1611]: time="2025-10-28T05:13:08.658090176Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:08.660130 containerd[1611]: time="2025-10-28T05:13:08.659914713Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:13:08.660130 containerd[1611]: time="2025-10-28T05:13:08.659992599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:13:08.660563 kubelet[2793]: E1028 05:13:08.660492 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:08.660563 kubelet[2793]: E1028 05:13:08.660556 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:08.662194 containerd[1611]: time="2025-10-28T05:13:08.662131495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 05:13:08.663261 kubelet[2793]: E1028 05:13:08.663036 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wgcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84854587bd-trjrh_calico-apiserver(4bd50f09-b032-49f4-8f6f-5043dcd6661f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:08.664666 kubelet[2793]: E1028 05:13:08.664635 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" podUID="4bd50f09-b032-49f4-8f6f-5043dcd6661f" Oct 28 05:13:08.673204 kubelet[2793]: I1028 05:13:08.672903 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4mflf" podStartSLOduration=37.672884548 podStartE2EDuration="37.672884548s" podCreationTimestamp="2025-10-28 05:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 05:13:08.655340523 +0000 UTC m=+42.324866594" watchObservedRunningTime="2025-10-28 05:13:08.672884548 +0000 UTC m=+42.342410619" Oct 28 05:13:08.837643 systemd-networkd[1506]: vxlan.calico: Link UP Oct 28 05:13:08.837974 systemd-networkd[1506]: vxlan.calico: Gained carrier Oct 28 05:13:08.960010 systemd-networkd[1506]: calidee43f6353e: Gained IPv6LL Oct 28 05:13:09.030880 containerd[1611]: time="2025-10-28T05:13:09.030782430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:09.032039 containerd[1611]: time="2025-10-28T05:13:09.031999224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 05:13:09.032245 containerd[1611]: time="2025-10-28T05:13:09.032150378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 05:13:09.032413 kubelet[2793]: E1028 05:13:09.032240 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:13:09.032413 kubelet[2793]: E1028 05:13:09.032298 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:13:09.032553 kubelet[2793]: E1028 05:13:09.032443 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4d7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xsl75_calico-system(d86ce3dc-83d4-402a-b381-76ea2d723abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:09.033734 kubelet[2793]: E1028 05:13:09.033674 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 05:13:09.216000 systemd-networkd[1506]: cali102a5b54847: Gained IPv6LL Oct 28 05:13:09.642493 kubelet[2793]: E1028 05:13:09.642309 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:09.643452 kubelet[2793]: E1028 05:13:09.643420 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" podUID="4bd50f09-b032-49f4-8f6f-5043dcd6661f" Oct 28 
05:13:09.645060 kubelet[2793]: E1028 05:13:09.645013 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 05:13:09.710666 kubelet[2793]: I1028 05:13:09.710586 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 05:13:09.711214 kubelet[2793]: E1028 05:13:09.711175 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:09.727994 systemd-networkd[1506]: cali92051ec4d9e: Gained IPv6LL Oct 28 05:13:09.872183 containerd[1611]: time="2025-10-28T05:13:09.872130139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076\" id:\"4127464744a63c66e3ad5f9ff3b892d1dd6aaa792ec1c4239ef4e9a650cc94c4\" pid:4972 exit_status:1 exited_at:{seconds:1761628389 nanos:871656700}" Oct 28 05:13:10.048125 systemd-networkd[1506]: vxlan.calico: Gained IPv6LL Oct 28 05:13:10.053180 containerd[1611]: 
time="2025-10-28T05:13:10.053130233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076\" id:\"b13875f913813ccd22f474019d12d70a425fe9820cb9c6b9d2464a0c14beaf5a\" pid:4996 exit_status:1 exited_at:{seconds:1761628390 nanos:52744579}" Oct 28 05:13:10.248272 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:49380.service - OpenSSH per-connection server daemon (10.0.0.1:49380). Oct 28 05:13:10.330290 sshd[5012]: Accepted publickey for core from 10.0.0.1 port 49380 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:10.332859 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:10.337941 systemd-logind[1584]: New session 11 of user core. Oct 28 05:13:10.345946 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 28 05:13:10.441728 sshd[5015]: Connection closed by 10.0.0.1 port 49380 Oct 28 05:13:10.442068 sshd-session[5012]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:10.447438 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:49380.service: Deactivated successfully. Oct 28 05:13:10.450226 systemd[1]: session-11.scope: Deactivated successfully. Oct 28 05:13:10.451340 systemd-logind[1584]: Session 11 logged out. Waiting for processes to exit. Oct 28 05:13:10.453237 systemd-logind[1584]: Removed session 11. Oct 28 05:13:10.644672 kubelet[2793]: E1028 05:13:10.644540 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:15.459770 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:49392.service - OpenSSH per-connection server daemon (10.0.0.1:49392). 
Oct 28 05:13:15.519009 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 49392 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:15.520730 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:15.525862 systemd-logind[1584]: New session 12 of user core. Oct 28 05:13:15.535296 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 28 05:13:15.623731 sshd[5043]: Connection closed by 10.0.0.1 port 49392 Oct 28 05:13:15.624992 sshd-session[5040]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:15.629124 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:49392.service: Deactivated successfully. Oct 28 05:13:15.631295 systemd[1]: session-12.scope: Deactivated successfully. Oct 28 05:13:15.632056 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit. Oct 28 05:13:15.633381 systemd-logind[1584]: Removed session 12. Oct 28 05:13:17.464729 containerd[1611]: time="2025-10-28T05:13:17.464662796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:13:17.859148 containerd[1611]: time="2025-10-28T05:13:17.858959371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:17.948945 containerd[1611]: time="2025-10-28T05:13:17.948855523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:13:17.949145 containerd[1611]: time="2025-10-28T05:13:17.948924753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:13:17.949308 kubelet[2793]: E1028 05:13:17.949229 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:17.949308 kubelet[2793]: E1028 05:13:17.949293 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:17.949900 kubelet[2793]: E1028 05:13:17.949494 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8hs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84854587bd-9b74s_calico-apiserver(e27c6b1a-18a7-411c-9050-0f3b48a38781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:17.950928 kubelet[2793]: E1028 05:13:17.950890 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" podUID="e27c6b1a-18a7-411c-9050-0f3b48a38781" Oct 28 05:13:19.464415 containerd[1611]: time="2025-10-28T05:13:19.464355701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 05:13:19.915980 containerd[1611]: 
time="2025-10-28T05:13:19.915926400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:19.984370 containerd[1611]: time="2025-10-28T05:13:19.984270974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 05:13:19.984546 containerd[1611]: time="2025-10-28T05:13:19.984332219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 05:13:19.984757 kubelet[2793]: E1028 05:13:19.984660 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:13:19.985222 kubelet[2793]: E1028 05:13:19.984769 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:13:19.985222 kubelet[2793]: E1028 05:13:19.985025 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zs97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9fjz_calico-system(49813e29-063b-4ee4-bacc-cb4ec7eba7e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:19.986262 kubelet[2793]: E1028 05:13:19.986220 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9fjz" podUID="49813e29-063b-4ee4-bacc-cb4ec7eba7e0" Oct 28 05:13:20.639642 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:54968.service - OpenSSH per-connection server daemon (10.0.0.1:54968). 
Oct 28 05:13:20.690964 sshd[5065]: Accepted publickey for core from 10.0.0.1 port 54968 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:20.693086 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:20.698023 systemd-logind[1584]: New session 13 of user core. Oct 28 05:13:20.711954 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 28 05:13:20.791441 sshd[5068]: Connection closed by 10.0.0.1 port 54968 Oct 28 05:13:20.791857 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:20.801876 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:54968.service: Deactivated successfully. Oct 28 05:13:20.803887 systemd[1]: session-13.scope: Deactivated successfully. Oct 28 05:13:20.804826 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit. Oct 28 05:13:20.807890 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:54970.service - OpenSSH per-connection server daemon (10.0.0.1:54970). Oct 28 05:13:20.808822 systemd-logind[1584]: Removed session 13. Oct 28 05:13:20.870933 sshd[5082]: Accepted publickey for core from 10.0.0.1 port 54970 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:20.872850 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:20.878500 systemd-logind[1584]: New session 14 of user core. Oct 28 05:13:20.889223 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 28 05:13:21.028861 sshd[5085]: Connection closed by 10.0.0.1 port 54970 Oct 28 05:13:21.029335 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:21.040011 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:54970.service: Deactivated successfully. Oct 28 05:13:21.043233 systemd[1]: session-14.scope: Deactivated successfully. Oct 28 05:13:21.045279 systemd-logind[1584]: Session 14 logged out. Waiting for processes to exit. 
Oct 28 05:13:21.053290 systemd-logind[1584]: Removed session 14. Oct 28 05:13:21.055514 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:54972.service - OpenSSH per-connection server daemon (10.0.0.1:54972). Oct 28 05:13:21.131410 sshd[5097]: Accepted publickey for core from 10.0.0.1 port 54972 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:21.133526 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:21.139391 systemd-logind[1584]: New session 15 of user core. Oct 28 05:13:21.155974 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 28 05:13:21.249783 sshd[5100]: Connection closed by 10.0.0.1 port 54972 Oct 28 05:13:21.250137 sshd-session[5097]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:21.254908 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:54972.service: Deactivated successfully. Oct 28 05:13:21.256859 systemd[1]: session-15.scope: Deactivated successfully. Oct 28 05:13:21.257917 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit. Oct 28 05:13:21.259012 systemd-logind[1584]: Removed session 15. 
Oct 28 05:13:21.466082 containerd[1611]: time="2025-10-28T05:13:21.465190718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 05:13:21.782395 containerd[1611]: time="2025-10-28T05:13:21.782244544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:21.794746 containerd[1611]: time="2025-10-28T05:13:21.794630115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 05:13:21.794866 containerd[1611]: time="2025-10-28T05:13:21.794727258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 05:13:21.795073 kubelet[2793]: E1028 05:13:21.795004 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:13:21.795556 kubelet[2793]: E1028 05:13:21.795079 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:13:21.795556 kubelet[2793]: E1028 05:13:21.795233 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2926ad180e634552b42f6badbaccdc32,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkk5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-664d46d96d-xvk5c_calico-system(8192e129-0d18-4558-9c03-84afd7a7f848): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:21.797462 containerd[1611]: time="2025-10-28T05:13:21.797396829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 
05:13:22.157927 containerd[1611]: time="2025-10-28T05:13:22.157874072Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:22.223906 containerd[1611]: time="2025-10-28T05:13:22.223776121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 05:13:22.223906 containerd[1611]: time="2025-10-28T05:13:22.223812140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 05:13:22.224230 kubelet[2793]: E1028 05:13:22.224114 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:13:22.224230 kubelet[2793]: E1028 05:13:22.224175 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:13:22.224431 kubelet[2793]: E1028 05:13:22.224380 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkk5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-664d46d96d-xvk5c_calico-system(8192e129-0d18-4558-9c03-84afd7a7f848): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:22.225596 kubelet[2793]: E1028 05:13:22.225549 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-664d46d96d-xvk5c" podUID="8192e129-0d18-4558-9c03-84afd7a7f848" Oct 28 05:13:22.465521 containerd[1611]: time="2025-10-28T05:13:22.464886838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 05:13:22.792278 containerd[1611]: time="2025-10-28T05:13:22.792121345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:22.817187 containerd[1611]: time="2025-10-28T05:13:22.817149993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 05:13:22.817261 containerd[1611]: time="2025-10-28T05:13:22.817177577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 05:13:22.817456 kubelet[2793]: E1028 05:13:22.817402 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:13:22.817724 kubelet[2793]: E1028 05:13:22.817466 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:13:22.818044 containerd[1611]: time="2025-10-28T05:13:22.817820559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:13:22.818104 kubelet[2793]: E1028 05:13:22.817814 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvswr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c6c464845-gchcn_calico-system(17014fb8-50b3-4fb0-83a0-7ba96dfdd77b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:22.819060 kubelet[2793]: E1028 05:13:22.819011 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" podUID="17014fb8-50b3-4fb0-83a0-7ba96dfdd77b" Oct 28 05:13:23.214770 containerd[1611]: time="2025-10-28T05:13:23.214702479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:23.215903 containerd[1611]: 
time="2025-10-28T05:13:23.215846225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:13:23.215903 containerd[1611]: time="2025-10-28T05:13:23.215886924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:13:23.216155 kubelet[2793]: E1028 05:13:23.216105 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:23.216215 kubelet[2793]: E1028 05:13:23.216167 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:23.216406 kubelet[2793]: E1028 05:13:23.216348 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wgcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84854587bd-trjrh_calico-apiserver(4bd50f09-b032-49f4-8f6f-5043dcd6661f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:23.217580 kubelet[2793]: E1028 05:13:23.217540 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" podUID="4bd50f09-b032-49f4-8f6f-5043dcd6661f" Oct 28 05:13:24.464722 containerd[1611]: time="2025-10-28T05:13:24.464653120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 05:13:24.887764 containerd[1611]: time="2025-10-28T05:13:24.887602975Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:24.888983 containerd[1611]: time="2025-10-28T05:13:24.888940717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 05:13:24.889078 containerd[1611]: time="2025-10-28T05:13:24.888998189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 05:13:24.889207 kubelet[2793]: E1028 05:13:24.889161 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:13:24.889638 kubelet[2793]: E1028 05:13:24.889223 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:13:24.889638 kubelet[2793]: E1028 05:13:24.889378 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4d7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xsl75_calico-system(d86ce3dc-83d4-402a-b381-76ea2d723abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:24.892142 containerd[1611]: time="2025-10-28T05:13:24.892090852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 05:13:25.196843 containerd[1611]: time="2025-10-28T05:13:25.196770049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:25.198539 containerd[1611]: time="2025-10-28T05:13:25.198496272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 05:13:25.198625 containerd[1611]: time="2025-10-28T05:13:25.198533435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 05:13:25.198835 kubelet[2793]: E1028 05:13:25.198765 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:13:25.198904 kubelet[2793]: E1028 05:13:25.198850 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:13:25.199057 kubelet[2793]: E1028 05:13:25.199011 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4d7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xsl75_calico-system(d86ce3dc-83d4-402a-b381-76ea2d723abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:25.200256 kubelet[2793]: E1028 05:13:25.200207 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 05:13:26.269111 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:43850.service - 
OpenSSH per-connection server daemon (10.0.0.1:43850). Oct 28 05:13:26.322364 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 43850 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:26.324267 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:26.328998 systemd-logind[1584]: New session 16 of user core. Oct 28 05:13:26.339978 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 28 05:13:26.451636 sshd[5121]: Connection closed by 10.0.0.1 port 43850 Oct 28 05:13:26.452055 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:26.457868 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:43850.service: Deactivated successfully. Oct 28 05:13:26.460088 systemd[1]: session-16.scope: Deactivated successfully. Oct 28 05:13:26.461122 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit. Oct 28 05:13:26.462874 systemd-logind[1584]: Removed session 16. Oct 28 05:13:28.464069 kubelet[2793]: E1028 05:13:28.464001 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" podUID="e27c6b1a-18a7-411c-9050-0f3b48a38781" Oct 28 05:13:31.463845 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:43852.service - OpenSSH per-connection server daemon (10.0.0.1:43852). 
Oct 28 05:13:31.513604 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 43852 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:31.515299 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:31.519496 systemd-logind[1584]: New session 17 of user core. Oct 28 05:13:31.534049 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 28 05:13:31.608245 sshd[5148]: Connection closed by 10.0.0.1 port 43852 Oct 28 05:13:31.608562 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:31.613257 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:43852.service: Deactivated successfully. Oct 28 05:13:31.615094 systemd[1]: session-17.scope: Deactivated successfully. Oct 28 05:13:31.615922 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit. Oct 28 05:13:31.616963 systemd-logind[1584]: Removed session 17. Oct 28 05:13:32.464405 kubelet[2793]: E1028 05:13:32.464325 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9fjz" podUID="49813e29-063b-4ee4-bacc-cb4ec7eba7e0" Oct 28 05:13:32.466730 kubelet[2793]: E1028 05:13:32.466658 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-664d46d96d-xvk5c" podUID="8192e129-0d18-4558-9c03-84afd7a7f848" Oct 28 05:13:35.463809 kubelet[2793]: E1028 05:13:35.463737 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" podUID="4bd50f09-b032-49f4-8f6f-5043dcd6661f" Oct 28 05:13:36.465683 kubelet[2793]: E1028 05:13:36.465619 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 05:13:36.469722 kubelet[2793]: E1028 05:13:36.469688 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:36.621122 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:37702.service - OpenSSH per-connection server daemon (10.0.0.1:37702). Oct 28 05:13:36.694082 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 37702 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:36.696080 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:36.701217 systemd-logind[1584]: New session 18 of user core. Oct 28 05:13:36.718051 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 28 05:13:36.789048 sshd[5168]: Connection closed by 10.0.0.1 port 37702 Oct 28 05:13:36.789434 sshd-session[5165]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:36.794782 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:37702.service: Deactivated successfully. Oct 28 05:13:36.797174 systemd[1]: session-18.scope: Deactivated successfully. Oct 28 05:13:36.797918 systemd-logind[1584]: Session 18 logged out. Waiting for processes to exit. Oct 28 05:13:36.799153 systemd-logind[1584]: Removed session 18. 
Oct 28 05:13:37.464595 kubelet[2793]: E1028 05:13:37.464534 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" podUID="17014fb8-50b3-4fb0-83a0-7ba96dfdd77b" Oct 28 05:13:40.056542 containerd[1611]: time="2025-10-28T05:13:40.056481749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839ed2195728776404f5631c78b9d3e10de4d32504541cab9c85c3a15ef68076\" id:\"f8ab520361cb69ce14578676feab9308a8f332417e8efa9d81b158533d846c0f\" pid:5194 exited_at:{seconds:1761628420 nanos:55619754}" Oct 28 05:13:40.058765 kubelet[2793]: E1028 05:13:40.058730 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:41.806849 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:37714.service - OpenSSH per-connection server daemon (10.0.0.1:37714). Oct 28 05:13:41.870596 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 37714 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:41.872578 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:41.878147 systemd-logind[1584]: New session 19 of user core. Oct 28 05:13:41.888059 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 28 05:13:41.968419 sshd[5211]: Connection closed by 10.0.0.1 port 37714 Oct 28 05:13:41.968985 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:41.980785 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:37714.service: Deactivated successfully. Oct 28 05:13:41.982775 systemd[1]: session-19.scope: Deactivated successfully. Oct 28 05:13:41.983914 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit. Oct 28 05:13:41.987237 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:37728.service - OpenSSH per-connection server daemon (10.0.0.1:37728). Oct 28 05:13:41.987935 systemd-logind[1584]: Removed session 19. Oct 28 05:13:42.061030 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 37728 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:42.062915 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:42.067739 systemd-logind[1584]: New session 20 of user core. Oct 28 05:13:42.081092 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 28 05:13:42.346915 sshd[5227]: Connection closed by 10.0.0.1 port 37728 Oct 28 05:13:42.346986 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:42.355623 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:37728.service: Deactivated successfully. Oct 28 05:13:42.357549 systemd[1]: session-20.scope: Deactivated successfully. Oct 28 05:13:42.358890 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit. Oct 28 05:13:42.362612 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:37742.service - OpenSSH per-connection server daemon (10.0.0.1:37742). Oct 28 05:13:42.363658 systemd-logind[1584]: Removed session 20. 
Oct 28 05:13:42.411918 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 37742 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:42.413714 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:42.418513 systemd-logind[1584]: New session 21 of user core. Oct 28 05:13:42.430039 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 28 05:13:42.923436 sshd[5242]: Connection closed by 10.0.0.1 port 37742 Oct 28 05:13:42.924484 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:42.937945 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:37742.service: Deactivated successfully. Oct 28 05:13:42.940014 systemd[1]: session-21.scope: Deactivated successfully. Oct 28 05:13:42.942764 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit. Oct 28 05:13:42.946334 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:37754.service - OpenSSH per-connection server daemon (10.0.0.1:37754). Oct 28 05:13:42.948203 systemd-logind[1584]: Removed session 21. Oct 28 05:13:42.997509 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 37754 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:42.999485 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:43.004610 systemd-logind[1584]: New session 22 of user core. Oct 28 05:13:43.011964 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 28 05:13:43.214082 sshd[5264]: Connection closed by 10.0.0.1 port 37754 Oct 28 05:13:43.215457 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:43.227456 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:37754.service: Deactivated successfully. Oct 28 05:13:43.230168 systemd[1]: session-22.scope: Deactivated successfully. Oct 28 05:13:43.231346 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit. 
Oct 28 05:13:43.235638 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:37756.service - OpenSSH per-connection server daemon (10.0.0.1:37756). Oct 28 05:13:43.236308 systemd-logind[1584]: Removed session 22. Oct 28 05:13:43.282899 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 37756 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:43.284524 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:43.289100 systemd-logind[1584]: New session 23 of user core. Oct 28 05:13:43.298963 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 28 05:13:43.371298 sshd[5279]: Connection closed by 10.0.0.1 port 37756 Oct 28 05:13:43.371629 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:43.376872 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit. Oct 28 05:13:43.377200 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:37756.service: Deactivated successfully. Oct 28 05:13:43.379489 systemd[1]: session-23.scope: Deactivated successfully. Oct 28 05:13:43.381277 systemd-logind[1584]: Removed session 23. 
Oct 28 05:13:43.465700 containerd[1611]: time="2025-10-28T05:13:43.465569864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:13:43.768213 containerd[1611]: time="2025-10-28T05:13:43.768077044Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:43.777303 containerd[1611]: time="2025-10-28T05:13:43.777246129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:13:43.777372 containerd[1611]: time="2025-10-28T05:13:43.777306776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:13:43.777500 kubelet[2793]: E1028 05:13:43.777456 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:43.777911 kubelet[2793]: E1028 05:13:43.777513 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:43.777911 kubelet[2793]: E1028 05:13:43.777671 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8hs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-84854587bd-9b74s_calico-apiserver(e27c6b1a-18a7-411c-9050-0f3b48a38781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:43.778905 kubelet[2793]: E1028 05:13:43.778869 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" podUID="e27c6b1a-18a7-411c-9050-0f3b48a38781" Oct 28 05:13:44.466881 containerd[1611]: time="2025-10-28T05:13:44.466543704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 05:13:44.855346 containerd[1611]: time="2025-10-28T05:13:44.854996974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:44.856586 containerd[1611]: time="2025-10-28T05:13:44.856463624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 05:13:44.856640 containerd[1611]: time="2025-10-28T05:13:44.856571560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 05:13:44.857498 kubelet[2793]: E1028 05:13:44.857426 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:13:44.858049 kubelet[2793]: E1028 05:13:44.857537 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 05:13:44.858049 kubelet[2793]: E1028 05:13:44.857965 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly
:nil,},VolumeMount{Name:kube-api-access-9zs97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k9fjz_calico-system(49813e29-063b-4ee4-bacc-cb4ec7eba7e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:44.859685 kubelet[2793]: E1028 05:13:44.859621 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9fjz" podUID="49813e29-063b-4ee4-bacc-cb4ec7eba7e0" Oct 28 05:13:47.465161 containerd[1611]: time="2025-10-28T05:13:47.465098200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 05:13:47.804141 containerd[1611]: time="2025-10-28T05:13:47.803934330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:47.805170 containerd[1611]: time="2025-10-28T05:13:47.805100491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 05:13:47.805170 containerd[1611]: time="2025-10-28T05:13:47.805148141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 05:13:47.805385 kubelet[2793]: E1028 05:13:47.805335 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:13:47.805890 kubelet[2793]: E1028 05:13:47.805395 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 05:13:47.806492 kubelet[2793]: E1028 
05:13:47.806395 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2926ad180e634552b42f6badbaccdc32,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkk5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-664d46d96d-xvk5c_calico-system(8192e129-0d18-4558-9c03-84afd7a7f848): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:47.808663 containerd[1611]: time="2025-10-28T05:13:47.808615083Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 05:13:48.154873 containerd[1611]: time="2025-10-28T05:13:48.154776735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:48.156088 containerd[1611]: time="2025-10-28T05:13:48.156027616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 05:13:48.156169 containerd[1611]: time="2025-10-28T05:13:48.156099874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 05:13:48.156439 kubelet[2793]: E1028 05:13:48.156356 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:13:48.156524 kubelet[2793]: E1028 05:13:48.156461 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 05:13:48.156733 kubelet[2793]: E1028 05:13:48.156686 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkk5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-664d46d96d-xvk5c_calico-system(8192e129-0d18-4558-9c03-84afd7a7f848): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:48.158112 kubelet[2793]: E1028 05:13:48.158053 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-664d46d96d-xvk5c" podUID="8192e129-0d18-4558-9c03-84afd7a7f848" Oct 28 05:13:48.388481 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:46194.service - OpenSSH per-connection server daemon (10.0.0.1:46194). Oct 28 05:13:48.465632 containerd[1611]: time="2025-10-28T05:13:48.465174629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 05:13:48.466385 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 46194 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:48.468341 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:48.474524 systemd-logind[1584]: New session 24 of user core. Oct 28 05:13:48.478993 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 28 05:13:48.566747 sshd[5298]: Connection closed by 10.0.0.1 port 46194 Oct 28 05:13:48.567124 sshd-session[5295]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:48.572537 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:46194.service: Deactivated successfully. Oct 28 05:13:48.574514 systemd[1]: session-24.scope: Deactivated successfully. Oct 28 05:13:48.575294 systemd-logind[1584]: Session 24 logged out. Waiting for processes to exit. Oct 28 05:13:48.576392 systemd-logind[1584]: Removed session 24. Oct 28 05:13:48.786141 containerd[1611]: time="2025-10-28T05:13:48.785963482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:48.787209 containerd[1611]: time="2025-10-28T05:13:48.787172695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 05:13:48.787345 containerd[1611]: time="2025-10-28T05:13:48.787272234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 05:13:48.787502 kubelet[2793]: E1028 05:13:48.787452 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:13:48.787547 kubelet[2793]: E1028 05:13:48.787515 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 05:13:48.787772 kubelet[2793]: E1028 05:13:48.787704 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvswr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c6c464845-gchcn_calico-system(17014fb8-50b3-4fb0-83a0-7ba96dfdd77b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:48.788990 kubelet[2793]: E1028 05:13:48.788946 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c6c464845-gchcn" podUID="17014fb8-50b3-4fb0-83a0-7ba96dfdd77b" Oct 28 05:13:49.467422 kubelet[2793]: E1028 05:13:49.466423 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:49.470046 containerd[1611]: time="2025-10-28T05:13:49.469125849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 05:13:49.805393 containerd[1611]: time="2025-10-28T05:13:49.805179909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:49.806475 containerd[1611]: time="2025-10-28T05:13:49.806416141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 05:13:49.806475 containerd[1611]: time="2025-10-28T05:13:49.806482729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 05:13:49.806743 kubelet[2793]: E1028 05:13:49.806654 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:49.806743 kubelet[2793]: E1028 05:13:49.806712 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 05:13:49.807013 kubelet[2793]: E1028 05:13:49.806944 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wgcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-84854587bd-trjrh_calico-apiserver(4bd50f09-b032-49f4-8f6f-5043dcd6661f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:49.808172 kubelet[2793]: E1028 05:13:49.808134 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-trjrh" podUID="4bd50f09-b032-49f4-8f6f-5043dcd6661f" Oct 28 05:13:51.466359 containerd[1611]: time="2025-10-28T05:13:51.466248411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 05:13:51.778342 containerd[1611]: time="2025-10-28T05:13:51.778108967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:51.779611 containerd[1611]: time="2025-10-28T05:13:51.779546762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 05:13:51.779686 containerd[1611]: time="2025-10-28T05:13:51.779586868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 05:13:51.779902 kubelet[2793]: E1028 05:13:51.779845 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:13:51.780360 kubelet[2793]: E1028 05:13:51.779923 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 05:13:51.780360 kubelet[2793]: E1028 05:13:51.780078 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4d7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[A
LL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xsl75_calico-system(d86ce3dc-83d4-402a-b381-76ea2d723abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:51.782067 containerd[1611]: time="2025-10-28T05:13:51.782025855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 05:13:52.083705 containerd[1611]: time="2025-10-28T05:13:52.083544277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 05:13:52.085668 containerd[1611]: time="2025-10-28T05:13:52.085618445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 05:13:52.085757 containerd[1611]: time="2025-10-28T05:13:52.085688089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 05:13:52.085944 kubelet[2793]: E1028 05:13:52.085896 2793 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:13:52.086027 kubelet[2793]: E1028 05:13:52.085949 2793 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 05:13:52.086183 kubelet[2793]: E1028 05:13:52.086125 2793 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4d7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xsl75_calico-system(d86ce3dc-83d4-402a-b381-76ea2d723abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 05:13:52.087430 kubelet[2793]: E1028 05:13:52.087368 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xsl75" podUID="d86ce3dc-83d4-402a-b381-76ea2d723abb" Oct 28 
05:13:53.580177 systemd[1]: Started sshd@24-10.0.0.49:22-10.0.0.1:46208.service - OpenSSH per-connection server daemon (10.0.0.1:46208). Oct 28 05:13:53.639847 sshd[5320]: Accepted publickey for core from 10.0.0.1 port 46208 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:53.641971 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:53.647306 systemd-logind[1584]: New session 25 of user core. Oct 28 05:13:53.653963 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 28 05:13:53.742849 sshd[5323]: Connection closed by 10.0.0.1 port 46208 Oct 28 05:13:53.743187 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:53.748731 systemd[1]: sshd@24-10.0.0.49:22-10.0.0.1:46208.service: Deactivated successfully. Oct 28 05:13:53.751379 systemd[1]: session-25.scope: Deactivated successfully. Oct 28 05:13:53.752488 systemd-logind[1584]: Session 25 logged out. Waiting for processes to exit. Oct 28 05:13:53.754673 systemd-logind[1584]: Removed session 25. 
Oct 28 05:13:56.468835 kubelet[2793]: E1028 05:13:56.468619 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k9fjz" podUID="49813e29-063b-4ee4-bacc-cb4ec7eba7e0" Oct 28 05:13:57.464960 kubelet[2793]: E1028 05:13:57.464855 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84854587bd-9b74s" podUID="e27c6b1a-18a7-411c-9050-0f3b48a38781" Oct 28 05:13:58.463141 kubelet[2793]: E1028 05:13:58.463095 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 05:13:58.763815 systemd[1]: Started sshd@25-10.0.0.49:22-10.0.0.1:48958.service - OpenSSH per-connection server daemon (10.0.0.1:48958). Oct 28 05:13:58.825240 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 48958 ssh2: RSA SHA256:fnPxJp9OOcM7toOTW/sODQxaZsmsBo9HTVuuDohs1/Q Oct 28 05:13:58.827104 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 05:13:58.831891 systemd-logind[1584]: New session 26 of user core. 
Oct 28 05:13:58.841917 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 28 05:13:58.910050 sshd[5339]: Connection closed by 10.0.0.1 port 48958 Oct 28 05:13:58.910378 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Oct 28 05:13:58.915220 systemd[1]: sshd@25-10.0.0.49:22-10.0.0.1:48958.service: Deactivated successfully. Oct 28 05:13:58.917941 systemd[1]: session-26.scope: Deactivated successfully. Oct 28 05:13:58.918777 systemd-logind[1584]: Session 26 logged out. Waiting for processes to exit. Oct 28 05:13:58.920042 systemd-logind[1584]: Removed session 26. Oct 28 05:13:59.464869 kubelet[2793]: E1028 05:13:59.464781 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-664d46d96d-xvk5c" podUID="8192e129-0d18-4558-9c03-84afd7a7f848"