Nov 5 16:00:47.654262 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025 Nov 5 16:00:47.654287 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 16:00:47.654296 kernel: BIOS-provided physical RAM map: Nov 5 16:00:47.654303 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Nov 5 16:00:47.654310 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Nov 5 16:00:47.654319 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Nov 5 16:00:47.654327 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Nov 5 16:00:47.654344 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Nov 5 16:00:47.654351 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Nov 5 16:00:47.654358 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Nov 5 16:00:47.654366 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Nov 5 16:00:47.654373 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Nov 5 16:00:47.654380 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Nov 5 16:00:47.654389 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Nov 5 16:00:47.654398 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Nov 5 16:00:47.654408 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Nov 5 16:00:47.654419 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 5 16:00:47.654431 
kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 5 16:00:47.654439 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 5 16:00:47.654446 kernel: NX (Execute Disable) protection: active Nov 5 16:00:47.654454 kernel: APIC: Static calls initialized Nov 5 16:00:47.654481 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable Nov 5 16:00:47.654489 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable Nov 5 16:00:47.654496 kernel: extended physical RAM map: Nov 5 16:00:47.654504 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Nov 5 16:00:47.654511 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Nov 5 16:00:47.654518 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Nov 5 16:00:47.654526 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Nov 5 16:00:47.654536 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable Nov 5 16:00:47.654543 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable Nov 5 16:00:47.654550 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable Nov 5 16:00:47.654558 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable Nov 5 16:00:47.654565 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable Nov 5 16:00:47.654572 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Nov 5 16:00:47.654580 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Nov 5 16:00:47.654587 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Nov 5 16:00:47.654595 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Nov 5 16:00:47.654602 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Nov 5 16:00:47.654612 kernel: reserve 
setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Nov 5 16:00:47.654619 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Nov 5 16:00:47.654630 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Nov 5 16:00:47.654637 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 5 16:00:47.654645 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 5 16:00:47.654655 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 5 16:00:47.654662 kernel: efi: EFI v2.7 by EDK II Nov 5 16:00:47.654670 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Nov 5 16:00:47.654678 kernel: random: crng init done Nov 5 16:00:47.654686 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Nov 5 16:00:47.654693 kernel: secureboot: Secure boot enabled Nov 5 16:00:47.654701 kernel: SMBIOS 2.8 present. 
Nov 5 16:00:47.654708 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Nov 5 16:00:47.654716 kernel: DMI: Memory slots populated: 1/1 Nov 5 16:00:47.654725 kernel: Hypervisor detected: KVM Nov 5 16:00:47.654733 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Nov 5 16:00:47.654741 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 5 16:00:47.654748 kernel: kvm-clock: using sched offset of 4865870377 cycles Nov 5 16:00:47.654757 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 5 16:00:47.654766 kernel: tsc: Detected 2794.748 MHz processor Nov 5 16:00:47.654774 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 5 16:00:47.654782 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 5 16:00:47.654790 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Nov 5 16:00:47.654800 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 5 16:00:47.654808 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 5 16:00:47.654816 kernel: Using GB pages for direct mapping Nov 5 16:00:47.654824 kernel: ACPI: Early table checksum verification disabled Nov 5 16:00:47.654832 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Nov 5 16:00:47.654841 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 5 16:00:47.654849 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 16:00:47.654859 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 16:00:47.654867 kernel: ACPI: FACS 0x000000009BBDD000 000040 Nov 5 16:00:47.654875 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 16:00:47.654883 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 16:00:47.654891 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 
BXPC 00000001) Nov 5 16:00:47.654899 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 16:00:47.654907 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 5 16:00:47.654917 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Nov 5 16:00:47.654925 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Nov 5 16:00:47.654933 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Nov 5 16:00:47.654941 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Nov 5 16:00:47.654949 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Nov 5 16:00:47.654957 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Nov 5 16:00:47.654965 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Nov 5 16:00:47.654974 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Nov 5 16:00:47.654982 kernel: No NUMA configuration found Nov 5 16:00:47.654990 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Nov 5 16:00:47.654998 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Nov 5 16:00:47.655006 kernel: Zone ranges: Nov 5 16:00:47.655014 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 5 16:00:47.655022 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Nov 5 16:00:47.655030 kernel: Normal empty Nov 5 16:00:47.655040 kernel: Device empty Nov 5 16:00:47.655048 kernel: Movable zone start for each node Nov 5 16:00:47.655056 kernel: Early memory node ranges Nov 5 16:00:47.655064 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Nov 5 16:00:47.655072 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Nov 5 16:00:47.655080 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Nov 5 16:00:47.655088 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Nov 5 16:00:47.655097 kernel: node 0: [mem 
0x000000009bfb7000-0x000000009bffffff] Nov 5 16:00:47.655105 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Nov 5 16:00:47.655113 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 5 16:00:47.655121 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Nov 5 16:00:47.655129 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 5 16:00:47.655137 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 5 16:00:47.655145 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Nov 5 16:00:47.655156 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Nov 5 16:00:47.655164 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 5 16:00:47.655172 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 5 16:00:47.655180 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 5 16:00:47.655188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 5 16:00:47.655196 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 5 16:00:47.655204 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 5 16:00:47.655212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 5 16:00:47.655222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 5 16:00:47.655230 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 5 16:00:47.655238 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 5 16:00:47.655246 kernel: TSC deadline timer available Nov 5 16:00:47.655254 kernel: CPU topo: Max. logical packages: 1 Nov 5 16:00:47.655262 kernel: CPU topo: Max. logical dies: 1 Nov 5 16:00:47.655278 kernel: CPU topo: Max. dies per package: 1 Nov 5 16:00:47.655291 kernel: CPU topo: Max. threads per core: 1 Nov 5 16:00:47.655299 kernel: CPU topo: Num. cores per package: 4 Nov 5 16:00:47.655309 kernel: CPU topo: Num. 
threads per package: 4 Nov 5 16:00:47.655318 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Nov 5 16:00:47.655382 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 5 16:00:47.655391 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 5 16:00:47.655402 kernel: kvm-guest: setup PV sched yield Nov 5 16:00:47.655416 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Nov 5 16:00:47.655428 kernel: Booting paravirtualized kernel on KVM Nov 5 16:00:47.655437 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 5 16:00:47.655446 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 5 16:00:47.655454 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Nov 5 16:00:47.655505 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Nov 5 16:00:47.655517 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 5 16:00:47.655525 kernel: kvm-guest: PV spinlocks enabled Nov 5 16:00:47.655533 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 5 16:00:47.655543 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 16:00:47.655552 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 5 16:00:47.655561 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 5 16:00:47.655569 kernel: Fallback order for Node 0: 0 Nov 5 16:00:47.655580 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 638054 Nov 5 16:00:47.655588 kernel: Policy zone: DMA32 Nov 5 16:00:47.655597 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 16:00:47.655607 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 5 16:00:47.655624 kernel: ftrace: allocating 40092 entries in 157 pages Nov 5 16:00:47.655635 kernel: ftrace: allocated 157 pages with 5 groups Nov 5 16:00:47.655645 kernel: Dynamic Preempt: voluntary Nov 5 16:00:47.655659 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 16:00:47.655671 kernel: rcu: RCU event tracing is enabled. Nov 5 16:00:47.655682 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 5 16:00:47.655693 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 16:00:47.655703 kernel: Rude variant of Tasks RCU enabled. Nov 5 16:00:47.655714 kernel: Tracing variant of Tasks RCU enabled. Nov 5 16:00:47.655724 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 5 16:00:47.655735 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 5 16:00:47.655749 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 5 16:00:47.655759 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 5 16:00:47.655771 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 5 16:00:47.655781 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 5 16:00:47.655792 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 5 16:00:47.655803 kernel: Console: colour dummy device 80x25 Nov 5 16:00:47.655814 kernel: printk: legacy console [ttyS0] enabled Nov 5 16:00:47.655827 kernel: ACPI: Core revision 20240827 Nov 5 16:00:47.655838 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 5 16:00:47.655849 kernel: APIC: Switch to symmetric I/O mode setup Nov 5 16:00:47.655859 kernel: x2apic enabled Nov 5 16:00:47.655869 kernel: APIC: Switched APIC routing to: physical x2apic Nov 5 16:00:47.655880 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 5 16:00:47.655891 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 5 16:00:47.655903 kernel: kvm-guest: setup PV IPIs Nov 5 16:00:47.655914 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 5 16:00:47.655924 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 5 16:00:47.655935 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Nov 5 16:00:47.655946 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 5 16:00:47.655957 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 5 16:00:47.655968 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 5 16:00:47.655981 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 5 16:00:47.655992 kernel: Spectre V2 : Mitigation: Retpolines Nov 5 16:00:47.656003 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 5 16:00:47.656014 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 5 16:00:47.656025 kernel: active return thunk: retbleed_return_thunk Nov 5 16:00:47.656035 kernel: RETBleed: Mitigation: untrained return thunk Nov 5 16:00:47.656046 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 5 16:00:47.656059 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 5 16:00:47.656070 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 5 16:00:47.656082 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 5 16:00:47.656092 kernel: active return thunk: srso_return_thunk Nov 5 16:00:47.656103 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 5 16:00:47.656114 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 5 16:00:47.656126 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 5 16:00:47.656142 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 5 16:00:47.656154 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 5 16:00:47.656167 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Nov 5 16:00:47.656181 kernel: Freeing SMP alternatives memory: 32K Nov 5 16:00:47.656193 kernel: pid_max: default: 32768 minimum: 301 Nov 5 16:00:47.656204 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 16:00:47.656215 kernel: landlock: Up and running. Nov 5 16:00:47.656229 kernel: SELinux: Initializing. Nov 5 16:00:47.656241 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 16:00:47.656252 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 16:00:47.656264 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 5 16:00:47.656275 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 5 16:00:47.656287 kernel: ... version: 0 Nov 5 16:00:47.656298 kernel: ... bit width: 48 Nov 5 16:00:47.656313 kernel: ... generic registers: 6 Nov 5 16:00:47.656325 kernel: ... value mask: 0000ffffffffffff Nov 5 16:00:47.656347 kernel: ... max period: 00007fffffffffff Nov 5 16:00:47.656359 kernel: ... fixed-purpose events: 0 Nov 5 16:00:47.656370 kernel: ... event mask: 000000000000003f Nov 5 16:00:47.656381 kernel: signal: max sigframe size: 1776 Nov 5 16:00:47.656392 kernel: rcu: Hierarchical SRCU implementation. Nov 5 16:00:47.656408 kernel: rcu: Max phase no-delay instances is 400. Nov 5 16:00:47.656420 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 16:00:47.656431 kernel: smp: Bringing up secondary CPUs ... Nov 5 16:00:47.656443 kernel: smpboot: x86: Booting SMP configuration: Nov 5 16:00:47.656454 kernel: .... 
node #0, CPUs: #1 #2 #3 Nov 5 16:00:47.656483 kernel: smp: Brought up 1 node, 4 CPUs Nov 5 16:00:47.656495 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Nov 5 16:00:47.656511 kernel: Memory: 2431744K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114536K reserved, 0K cma-reserved) Nov 5 16:00:47.656522 kernel: devtmpfs: initialized Nov 5 16:00:47.656534 kernel: x86/mm: Memory block size: 128MB Nov 5 16:00:47.656545 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Nov 5 16:00:47.656557 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Nov 5 16:00:47.656569 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 16:00:47.656580 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 5 16:00:47.656595 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 16:00:47.656606 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 16:00:47.656623 kernel: audit: initializing netlink subsys (disabled) Nov 5 16:00:47.656635 kernel: audit: type=2000 audit(1762358445.096:1): state=initialized audit_enabled=0 res=1 Nov 5 16:00:47.656645 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 16:00:47.656656 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 5 16:00:47.656666 kernel: cpuidle: using governor menu Nov 5 16:00:47.656677 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 16:00:47.656692 kernel: dca service started, version 1.12.1 Nov 5 16:00:47.656703 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Nov 5 16:00:47.656720 kernel: PCI: Using configuration type 1 for base access Nov 5 16:00:47.656733 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 5 16:00:47.656745 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 5 16:00:47.656756 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 5 16:00:47.656767 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 16:00:47.656782 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 16:00:47.656792 kernel: ACPI: Added _OSI(Module Device) Nov 5 16:00:47.656803 kernel: ACPI: Added _OSI(Processor Device) Nov 5 16:00:47.656819 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 16:00:47.656831 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 16:00:47.656842 kernel: ACPI: Interpreter enabled Nov 5 16:00:47.656853 kernel: ACPI: PM: (supports S0 S5) Nov 5 16:00:47.656869 kernel: ACPI: Using IOAPIC for interrupt routing Nov 5 16:00:47.656880 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 5 16:00:47.656891 kernel: PCI: Using E820 reservations for host bridge windows Nov 5 16:00:47.656902 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 5 16:00:47.656913 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 5 16:00:47.657219 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 5 16:00:47.657451 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 5 16:00:47.657650 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 5 16:00:47.657662 kernel: PCI host bridge to bus 0000:00 Nov 5 16:00:47.657829 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 5 16:00:47.658011 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 5 16:00:47.658200 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 5 16:00:47.658556 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Nov 5 16:00:47.658720 kernel: pci_bus 0000:00: root bus 
resource [mem 0xf0000000-0xfebfffff window] Nov 5 16:00:47.658872 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Nov 5 16:00:47.659036 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 5 16:00:47.659243 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 5 16:00:47.659436 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 5 16:00:47.659622 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Nov 5 16:00:47.659793 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Nov 5 16:00:47.659960 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 5 16:00:47.660142 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 5 16:00:47.660366 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 5 16:00:47.660559 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Nov 5 16:00:47.660727 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Nov 5 16:00:47.660896 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Nov 5 16:00:47.661119 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 5 16:00:47.661302 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Nov 5 16:00:47.661496 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Nov 5 16:00:47.661669 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Nov 5 16:00:47.661844 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 5 16:00:47.662019 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Nov 5 16:00:47.662186 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Nov 5 16:00:47.662364 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Nov 5 16:00:47.662551 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Nov 5 
16:00:47.662733 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 5 16:00:47.662905 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 5 16:00:47.663098 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 5 16:00:47.663269 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Nov 5 16:00:47.663448 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Nov 5 16:00:47.663674 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 5 16:00:47.663839 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Nov 5 16:00:47.663851 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 5 16:00:47.663860 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 5 16:00:47.663869 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 5 16:00:47.663878 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 5 16:00:47.663891 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 5 16:00:47.663899 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 5 16:00:47.663908 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 5 16:00:47.663917 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 5 16:00:47.663926 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 5 16:00:47.663934 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 5 16:00:47.663943 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 5 16:00:47.663958 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 5 16:00:47.663970 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 5 16:00:47.663982 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 5 16:00:47.663993 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 5 16:00:47.664002 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 5 
16:00:47.664011 kernel: iommu: Default domain type: Translated Nov 5 16:00:47.664020 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 5 16:00:47.664031 kernel: efivars: Registered efivars operations Nov 5 16:00:47.664040 kernel: PCI: Using ACPI for IRQ routing Nov 5 16:00:47.664049 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 5 16:00:47.664058 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Nov 5 16:00:47.664066 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff] Nov 5 16:00:47.664075 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff] Nov 5 16:00:47.664083 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Nov 5 16:00:47.664094 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Nov 5 16:00:47.664267 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 5 16:00:47.664529 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 5 16:00:47.664754 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 5 16:00:47.664767 kernel: vgaarb: loaded Nov 5 16:00:47.664776 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 5 16:00:47.664785 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 5 16:00:47.664798 kernel: clocksource: Switched to clocksource kvm-clock Nov 5 16:00:47.664806 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 16:00:47.664815 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 16:00:47.664824 kernel: pnp: PnP ACPI init Nov 5 16:00:47.665032 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Nov 5 16:00:47.665052 kernel: pnp: PnP ACPI: found 6 devices Nov 5 16:00:47.665065 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 5 16:00:47.665080 kernel: NET: Registered PF_INET protocol family Nov 5 16:00:47.665091 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 5 16:00:47.665100 kernel: 
tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 16:00:47.665109 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 16:00:47.665117 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 16:00:47.665126 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 16:00:47.665135 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 16:00:47.665147 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 16:00:47.665158 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 16:00:47.665172 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 16:00:47.665183 kernel: NET: Registered PF_XDP protocol family
Nov 5 16:00:47.665376 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 5 16:00:47.665582 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 5 16:00:47.665747 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 16:00:47.665902 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 16:00:47.666113 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 16:00:47.666278 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 5 16:00:47.666448 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 5 16:00:47.666617 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 5 16:00:47.666629 kernel: PCI: CLS 0 bytes, default 64
Nov 5 16:00:47.666643 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 16:00:47.666651 kernel: Initialise system trusted keyrings
Nov 5 16:00:47.666660 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 16:00:47.666669 kernel: Key type asymmetric registered
Nov 5 16:00:47.666678 kernel: Asymmetric key parser 'x509' registered
Nov 5 16:00:47.666701 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 16:00:47.666712 kernel: io scheduler mq-deadline registered
Nov 5 16:00:47.666723 kernel: io scheduler kyber registered
Nov 5 16:00:47.666732 kernel: io scheduler bfq registered
Nov 5 16:00:47.666741 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 16:00:47.666750 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 16:00:47.666759 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 16:00:47.666768 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 5 16:00:47.666777 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 16:00:47.666788 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 16:00:47.666798 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 16:00:47.666806 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 16:00:47.666816 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 16:00:47.666825 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 16:00:47.667008 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 5 16:00:47.667175 kernel: rtc_cmos 00:04: registered as rtc0
Nov 5 16:00:47.667347 kernel: rtc_cmos 00:04: setting system clock to 2025-11-05T16:00:45 UTC (1762358445)
Nov 5 16:00:47.667550 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 5 16:00:47.667564 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 16:00:47.667573 kernel: efifb: probing for efifb
Nov 5 16:00:47.667586 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 5 16:00:47.667595 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 5 16:00:47.667607 kernel: efifb: scrolling: redraw
Nov 5 16:00:47.667618 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 5 16:00:47.667628 kernel: Console: switching to colour frame buffer device 160x50
Nov 5 16:00:47.667639 kernel: fb0: EFI VGA frame buffer device
Nov 5 16:00:47.667648 kernel: pstore: Using crash dump compression: deflate
Nov 5 16:00:47.667658 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 5 16:00:47.667667 kernel: NET: Registered PF_INET6 protocol family
Nov 5 16:00:47.667676 kernel: Segment Routing with IPv6
Nov 5 16:00:47.667685 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 16:00:47.667694 kernel: NET: Registered PF_PACKET protocol family
Nov 5 16:00:47.667703 kernel: Key type dns_resolver registered
Nov 5 16:00:47.667712 kernel: IPI shorthand broadcast: enabled
Nov 5 16:00:47.667723 kernel: sched_clock: Marking stable (1280004207, 267402548)->(1605273180, -57866425)
Nov 5 16:00:47.667732 kernel: registered taskstats version 1
Nov 5 16:00:47.667741 kernel: Loading compiled-in X.509 certificates
Nov 5 16:00:47.667756 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 16:00:47.667765 kernel: Demotion targets for Node 0: null
Nov 5 16:00:47.667775 kernel: Key type .fscrypt registered
Nov 5 16:00:47.667784 kernel: Key type fscrypt-provisioning registered
Nov 5 16:00:47.667795 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 16:00:47.667804 kernel: ima: Allocated hash algorithm: sha1
Nov 5 16:00:47.667813 kernel: ima: No architecture policies found
Nov 5 16:00:47.667822 kernel: clk: Disabling unused clocks
Nov 5 16:00:47.667831 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 16:00:47.667839 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 16:00:47.667848 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 16:00:47.667859 kernel: Run /init as init process
Nov 5 16:00:47.667868 kernel: with arguments:
Nov 5 16:00:47.667880 kernel: /init
Nov 5 16:00:47.667889 kernel: with environment:
Nov 5 16:00:47.667897 kernel: HOME=/
Nov 5 16:00:47.667906 kernel: TERM=linux
Nov 5 16:00:47.667915 kernel: SCSI subsystem initialized
Nov 5 16:00:47.667926 kernel: libata version 3.00 loaded.
Nov 5 16:00:47.668109 kernel: ahci 0000:00:1f.2: version 3.0
Nov 5 16:00:47.668122 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 5 16:00:47.668301 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 5 16:00:47.668497 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 5 16:00:47.668671 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 5 16:00:47.668863 kernel: scsi host0: ahci
Nov 5 16:00:47.669078 kernel: scsi host1: ahci
Nov 5 16:00:47.669263 kernel: scsi host2: ahci
Nov 5 16:00:47.669451 kernel: scsi host3: ahci
Nov 5 16:00:47.669645 kernel: scsi host4: ahci
Nov 5 16:00:47.669818 kernel: scsi host5: ahci
Nov 5 16:00:47.669835 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 5 16:00:47.669844 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 5 16:00:47.669854 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 5 16:00:47.669863 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 5 16:00:47.669872 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Nov 5 16:00:47.669881 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Nov 5 16:00:47.669893 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 5 16:00:47.669902 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 5 16:00:47.669911 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 5 16:00:47.669920 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 5 16:00:47.669929 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 5 16:00:47.669938 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 16:00:47.669949 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 5 16:00:47.669970 kernel: ata3.00: applying bridge limits
Nov 5 16:00:47.669985 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 16:00:47.669994 kernel: ata3.00: configured for UDMA/100
Nov 5 16:00:47.670002 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 5 16:00:47.670232 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 5 16:00:47.670430 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 5 16:00:47.670645 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 5 16:00:47.670664 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 16:00:47.670673 kernel: GPT:16515071 != 27000831
Nov 5 16:00:47.670682 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 16:00:47.670691 kernel: GPT:16515071 != 27000831
Nov 5 16:00:47.670700 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 16:00:47.670709 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 16:00:47.670896 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 5 16:00:47.670908 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 16:00:47.671116 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 5 16:00:47.671130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 16:00:47.671139 kernel: device-mapper: uevent: version 1.0.3
Nov 5 16:00:47.671148 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 16:00:47.671158 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 16:00:47.671170 kernel: raid6: avx2x4 gen() 27203 MB/s
Nov 5 16:00:47.671179 kernel: raid6: avx2x2 gen() 30603 MB/s
Nov 5 16:00:47.671188 kernel: raid6: avx2x1 gen() 22113 MB/s
Nov 5 16:00:47.671197 kernel: raid6: using algorithm avx2x2 gen() 30603 MB/s
Nov 5 16:00:47.671205 kernel: raid6: .... xor() 19090 MB/s, rmw enabled
Nov 5 16:00:47.671219 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 16:00:47.671236 kernel: xor: automatically using best checksumming function avx
Nov 5 16:00:47.671251 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 16:00:47.671263 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182)
Nov 5 16:00:47.671274 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01
Nov 5 16:00:47.671285 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 16:00:47.671297 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 16:00:47.671308 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 16:00:47.671317 kernel: loop: module loaded
Nov 5 16:00:47.671337 kernel: loop0: detected capacity change from 0 to 100120
Nov 5 16:00:47.671346 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 16:00:47.671356 systemd[1]: Successfully made /usr/ read-only.
Nov 5 16:00:47.671369 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 16:00:47.671380 systemd[1]: Detected virtualization kvm.
Nov 5 16:00:47.671389 systemd[1]: Detected architecture x86-64.
Nov 5 16:00:47.671401 systemd[1]: Running in initrd.
Nov 5 16:00:47.671410 systemd[1]: No hostname configured, using default hostname.
Nov 5 16:00:47.671420 systemd[1]: Hostname set to .
Nov 5 16:00:47.671429 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 16:00:47.671438 systemd[1]: Queued start job for default target initrd.target.
Nov 5 16:00:47.671451 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 16:00:47.671481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 16:00:47.671495 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 16:00:47.671509 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 16:00:47.671520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 16:00:47.671530 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 16:00:47.671540 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 16:00:47.671551 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 16:00:47.671561 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 16:00:47.671571 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 16:00:47.671580 systemd[1]: Reached target paths.target - Path Units.
Nov 5 16:00:47.671590 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 16:00:47.671599 systemd[1]: Reached target swap.target - Swaps.
Nov 5 16:00:47.671611 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 16:00:47.671623 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 16:00:47.671632 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 16:00:47.671642 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 16:00:47.671651 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 16:00:47.671661 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 16:00:47.671671 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 16:00:47.671680 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 16:00:47.671692 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 16:00:47.671702 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 16:00:47.671712 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 16:00:47.671723 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 16:00:47.671732 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 16:00:47.671743 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 16:00:47.671755 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 16:00:47.671764 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 16:00:47.671774 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 16:00:47.671784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:00:47.671794 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 16:00:47.671806 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 16:00:47.671815 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 16:00:47.671827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 16:00:47.671867 systemd-journald[317]: Collecting audit messages is disabled.
Nov 5 16:00:47.671894 systemd-journald[317]: Journal started
Nov 5 16:00:47.671915 systemd-journald[317]: Runtime Journal (/run/log/journal/cc3567e341d54d69bebb1b77f220f0f7) is 5.9M, max 47.9M, 41.9M free.
Nov 5 16:00:47.674497 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 16:00:47.681933 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 16:00:47.688437 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 16:00:47.704500 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 16:00:47.707397 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 16:00:47.713520 kernel: Bridge firewalling registered
Nov 5 16:00:47.713374 systemd-modules-load[319]: Inserted module 'br_netfilter'
Nov 5 16:00:47.718053 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 16:00:47.722609 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 16:00:47.723354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:00:47.729927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 16:00:47.732733 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 16:00:47.735643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 16:00:47.755957 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 16:00:47.773674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 16:00:47.777624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 16:00:47.782718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 16:00:47.797695 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 16:00:47.837990 dracut-cmdline[358]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 16:00:47.864387 systemd-resolved[357]: Positive Trust Anchors:
Nov 5 16:00:47.864402 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 16:00:47.864406 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 16:00:47.864437 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 16:00:47.900022 systemd-resolved[357]: Defaulting to hostname 'linux'.
Nov 5 16:00:47.901788 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 16:00:47.904439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 16:00:47.987503 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 16:00:48.016546 kernel: iscsi: registered transport (tcp)
Nov 5 16:00:48.168054 kernel: iscsi: registered transport (qla4xxx)
Nov 5 16:00:48.168151 kernel: QLogic iSCSI HBA Driver
Nov 5 16:00:48.221421 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 16:00:48.308549 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 16:00:48.312111 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 16:00:48.461938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 16:00:48.466557 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 16:00:48.477064 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 16:00:48.576847 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 16:00:48.586718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 16:00:48.681285 systemd-udevd[594]: Using default interface naming scheme 'v257'.
Nov 5 16:00:48.723389 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 16:00:48.733550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 16:00:48.806391 dracut-pre-trigger[669]: rd.md=0: removing MD RAID activation
Nov 5 16:00:48.825848 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 16:00:48.843601 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 16:00:48.907874 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 16:00:48.912322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 16:00:49.074649 systemd-networkd[711]: lo: Link UP
Nov 5 16:00:49.074661 systemd-networkd[711]: lo: Gained carrier
Nov 5 16:00:49.076769 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 16:00:49.085375 systemd[1]: Reached target network.target - Network.
Nov 5 16:00:49.124272 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 16:00:49.213741 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 16:00:49.327702 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 16:00:49.330768 kernel: cryptd: max_cpu_qlen set to 1000
Nov 5 16:00:49.347042 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 16:00:49.360733 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 5 16:00:49.367171 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 16:00:49.374439 kernel: AES CTR mode by8 optimization enabled
Nov 5 16:00:49.391301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 16:00:49.399146 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 16:00:49.405449 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 16:00:49.405745 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:00:49.412857 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:00:49.414590 systemd-networkd[711]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 16:00:49.414597 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 16:00:49.417720 systemd-networkd[711]: eth0: Link UP
Nov 5 16:00:49.418057 systemd-networkd[711]: eth0: Gained carrier
Nov 5 16:00:49.418069 systemd-networkd[711]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 16:00:49.419785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:00:49.439551 disk-uuid[838]: Primary Header is updated.
Nov 5 16:00:49.439551 disk-uuid[838]: Secondary Entries is updated.
Nov 5 16:00:49.439551 disk-uuid[838]: Secondary Header is updated.
Nov 5 16:00:49.440535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 16:00:49.440706 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:00:49.444150 systemd-networkd[711]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 16:00:49.459026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:00:49.584931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:00:49.641484 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 16:00:49.644241 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 16:00:49.647920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 16:00:49.650225 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 16:00:49.657035 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 16:00:49.702181 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 16:00:50.563095 disk-uuid[840]: Warning: The kernel is still using the old partition table.
Nov 5 16:00:50.563095 disk-uuid[840]: The new table will be used at the next reboot or after you
Nov 5 16:00:50.563095 disk-uuid[840]: run partprobe(8) or kpartx(8)
Nov 5 16:00:50.563095 disk-uuid[840]: The operation has completed successfully.
Nov 5 16:00:50.575528 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 16:00:50.575717 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 16:00:50.577275 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 16:00:50.616525 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (870)
Nov 5 16:00:50.620126 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 16:00:50.620155 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 16:00:50.624610 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 16:00:50.624642 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 16:00:50.632499 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 16:00:50.633979 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 16:00:50.637394 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 16:00:51.057428 ignition[889]: Ignition 2.22.0
Nov 5 16:00:51.057447 ignition[889]: Stage: fetch-offline
Nov 5 16:00:51.057524 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Nov 5 16:00:51.057537 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 16:00:51.058034 ignition[889]: parsed url from cmdline: ""
Nov 5 16:00:51.058038 ignition[889]: no config URL provided
Nov 5 16:00:51.058045 ignition[889]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 16:00:51.058057 ignition[889]: no config at "/usr/lib/ignition/user.ign"
Nov 5 16:00:51.058106 ignition[889]: op(1): [started] loading QEMU firmware config module
Nov 5 16:00:51.058110 ignition[889]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 5 16:00:51.075455 ignition[889]: op(1): [finished] loading QEMU firmware config module
Nov 5 16:00:51.108655 systemd-networkd[711]: eth0: Gained IPv6LL
Nov 5 16:00:51.164946 ignition[889]: parsing config with SHA512: ffea8b82b994d903550948f97501fd1575d712005c225d6d5bb525c0f5253937203a55d981f6cc3bdd62ef09b6ee3e0b365978f23fb66bf891593f4a7c323687
Nov 5 16:00:51.182078 unknown[889]: fetched base config from "system"
Nov 5 16:00:51.182097 unknown[889]: fetched user config from "qemu"
Nov 5 16:00:51.182618 ignition[889]: fetch-offline: fetch-offline passed
Nov 5 16:00:51.185495 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 16:00:51.182711 ignition[889]: Ignition finished successfully
Nov 5 16:00:51.190144 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 16:00:51.192172 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 16:00:51.253765 ignition[900]: Ignition 2.22.0
Nov 5 16:00:51.253785 ignition[900]: Stage: kargs
Nov 5 16:00:51.253960 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Nov 5 16:00:51.253999 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 16:00:51.255054 ignition[900]: kargs: kargs passed
Nov 5 16:00:51.260676 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 16:00:51.255123 ignition[900]: Ignition finished successfully
Nov 5 16:00:51.264546 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 16:00:51.418176 ignition[908]: Ignition 2.22.0
Nov 5 16:00:51.418195 ignition[908]: Stage: disks
Nov 5 16:00:51.418406 ignition[908]: no configs at "/usr/lib/ignition/base.d"
Nov 5 16:00:51.418421 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 16:00:51.425974 ignition[908]: disks: disks passed
Nov 5 16:00:51.426046 ignition[908]: Ignition finished successfully
Nov 5 16:00:51.432003 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 16:00:51.432359 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 16:00:51.438324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 16:00:51.442811 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 16:00:51.446841 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 16:00:51.448700 systemd[1]: Reached target basic.target - Basic System.
Nov 5 16:00:51.455848 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 16:00:51.608453 systemd-fsck[918]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 5 16:00:51.623492 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 16:00:51.629210 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 16:00:51.757503 kernel: EXT4-fs (vda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none.
Nov 5 16:00:51.758137 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 16:00:51.759200 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 16:00:51.762935 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 16:00:51.767829 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 16:00:51.769731 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 16:00:51.769785 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 16:00:51.769816 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 16:00:51.784045 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 16:00:51.789275 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 16:00:51.797279 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (926)
Nov 5 16:00:51.797310 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 16:00:51.797326 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 16:00:51.800937 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 16:00:51.801017 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 16:00:51.802554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 16:00:51.858512 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 16:00:51.865343 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory
Nov 5 16:00:51.870019 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 16:00:51.874038 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 16:00:51.982036 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 16:00:51.986085 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 16:00:51.989278 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 16:00:52.010023 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 16:00:52.017600 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 16:00:52.036188 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 16:00:52.066420 ignition[1040]: INFO : Ignition 2.22.0
Nov 5 16:00:52.066420 ignition[1040]: INFO : Stage: mount
Nov 5 16:00:52.074476 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 16:00:52.074476 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 16:00:52.074476 ignition[1040]: INFO : mount: mount passed
Nov 5 16:00:52.074476 ignition[1040]: INFO : Ignition finished successfully
Nov 5 16:00:52.070190 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 16:00:52.074565 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 16:00:52.760261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 16:00:52.816582 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1053)
Nov 5 16:00:52.820312 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 16:00:52.820337 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 16:00:52.827160 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 16:00:52.827275 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 16:00:52.829140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 16:00:52.939160 ignition[1070]: INFO : Ignition 2.22.0
Nov 5 16:00:52.939160 ignition[1070]: INFO : Stage: files
Nov 5 16:00:52.942645 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 16:00:52.942645 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 16:00:52.942645 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 16:00:52.948998 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 16:00:52.948998 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 16:00:52.954862 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 16:00:52.957564 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 16:00:52.960381 unknown[1070]: wrote ssh authorized keys file for user: core
Nov 5 16:00:52.963150 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 16:00:52.967546 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 5 16:00:52.971109 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 5 16:00:53.080029 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 16:00:53.171935 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 5 16:00:53.175648 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 16:00:53.175648 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 16:00:53.175648 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 16:00:53.175648 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 16:00:53.175648 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 16:00:53.175648 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 16:00:53.175648 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 16:00:53.175648 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 16:00:53.200007 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 16:00:53.200007 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 16:00:53.200007 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 5 16:00:53.200007 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 5 16:00:53.200007 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 5 16:00:53.200007 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 5 16:00:53.864624 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 16:00:54.493996 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 5 16:00:54.493996 ignition[1070]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 16:00:54.500067 ignition[1070]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 16:00:54.607363 ignition[1070]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 16:00:54.607363 ignition[1070]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 16:00:54.607363 ignition[1070]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 16:00:54.607363 ignition[1070]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 16:00:54.624762 ignition[1070]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 16:00:54.624762 ignition[1070]: INFO
: files: op(d): [finished] processing unit "coreos-metadata.service" Nov 5 16:00:54.624762 ignition[1070]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 5 16:00:54.665564 ignition[1070]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 16:00:54.676949 ignition[1070]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 16:00:54.680242 ignition[1070]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 5 16:00:54.680242 ignition[1070]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 5 16:00:54.680242 ignition[1070]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 16:00:54.680242 ignition[1070]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 16:00:54.680242 ignition[1070]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 16:00:54.680242 ignition[1070]: INFO : files: files passed Nov 5 16:00:54.680242 ignition[1070]: INFO : Ignition finished successfully Nov 5 16:00:54.687779 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 16:00:54.695540 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 16:00:54.704950 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 16:00:54.719034 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 16:00:54.719207 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 5 16:00:54.728149 initrd-setup-root-after-ignition[1101]: grep: /sysroot/oem/oem-release: No such file or directory Nov 5 16:00:54.734069 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 16:00:54.737359 initrd-setup-root-after-ignition[1103]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 16:00:54.740449 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 16:00:54.745775 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 16:00:54.748682 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 16:00:54.755424 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 16:00:54.816760 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 16:00:54.816901 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 16:00:54.817554 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 16:00:54.825293 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 16:00:54.827922 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 16:00:54.829283 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 16:00:54.880029 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 16:00:54.883872 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 16:00:54.920422 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 16:00:54.920670 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 16:00:54.922903 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 5 16:00:54.926596 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 16:00:54.930215 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 16:00:54.930426 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 16:00:54.936342 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 16:00:54.939922 systemd[1]: Stopped target basic.target - Basic System. Nov 5 16:00:54.941582 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 16:00:54.946030 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 16:00:54.947808 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 16:00:54.954829 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 16:00:54.955018 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 16:00:54.958561 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 16:00:54.961765 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 16:00:54.962339 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 16:00:54.968659 systemd[1]: Stopped target swap.target - Swaps. Nov 5 16:00:54.971711 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 16:00:54.971886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 16:00:54.978973 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 16:00:54.979144 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 16:00:54.984447 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 16:00:54.986364 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 16:00:54.989962 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Nov 5 16:00:54.990137 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 16:00:54.995683 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 16:00:54.995837 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 16:00:54.999491 systemd[1]: Stopped target paths.target - Path Units. Nov 5 16:00:55.001075 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 16:00:55.007576 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 16:00:55.007766 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 16:00:55.013496 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 16:00:55.016506 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 16:00:55.016621 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 16:00:55.019592 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 16:00:55.019699 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 16:00:55.021064 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 16:00:55.021219 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 16:00:55.026078 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 16:00:55.026243 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 16:00:55.030836 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 16:00:55.032452 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 16:00:55.032615 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 16:00:55.038629 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 16:00:55.041061 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Nov 5 16:00:55.041305 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 16:00:55.042037 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 16:00:55.042280 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 16:00:55.047383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 16:00:55.047551 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 16:00:55.068670 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 16:00:55.068803 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 16:00:55.110061 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 16:00:55.129704 ignition[1127]: INFO : Ignition 2.22.0 Nov 5 16:00:55.129704 ignition[1127]: INFO : Stage: umount Nov 5 16:00:55.129704 ignition[1127]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 16:00:55.129704 ignition[1127]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 16:00:55.136567 ignition[1127]: INFO : umount: umount passed Nov 5 16:00:55.136567 ignition[1127]: INFO : Ignition finished successfully Nov 5 16:00:55.135100 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 16:00:55.135259 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 16:00:55.136827 systemd[1]: Stopped target network.target - Network. Nov 5 16:00:55.146682 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 16:00:55.146747 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 16:00:55.150201 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 16:00:55.150259 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 16:00:55.152033 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 16:00:55.152084 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Nov 5 16:00:55.157380 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 16:00:55.157431 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 16:00:55.161208 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 16:00:55.163041 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 16:00:55.178694 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 16:00:55.178927 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 16:00:55.186566 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 16:00:55.186705 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 16:00:55.195238 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 16:00:55.195501 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 16:00:55.195578 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 16:00:55.202995 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 16:00:55.204826 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 16:00:55.204913 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 16:00:55.210127 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 16:00:55.210240 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:00:55.211227 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 16:00:55.211291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 16:00:55.218569 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 16:00:55.220416 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 16:00:55.229321 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Nov 5 16:00:55.233252 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 16:00:55.233399 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 16:00:55.251108 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 16:00:55.251348 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 16:00:55.255062 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 16:00:55.255110 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 16:00:55.257022 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 16:00:55.257069 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 16:00:55.260409 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 16:00:55.260484 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 16:00:55.268715 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 16:00:55.268764 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 16:00:55.273519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 16:00:55.273575 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 16:00:55.280245 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 16:00:55.282056 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 16:00:55.282110 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 16:00:55.284311 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 16:00:55.284360 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 16:00:55.290279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 5 16:00:55.290365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:00:55.294096 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 16:00:55.294298 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 16:00:55.310077 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 16:00:55.310251 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 16:00:55.311963 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 16:00:55.313166 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 16:00:55.344188 systemd[1]: Switching root. Nov 5 16:00:55.399793 systemd-journald[317]: Journal stopped Nov 5 16:00:58.281051 systemd-journald[317]: Received SIGTERM from PID 1 (systemd). Nov 5 16:00:58.281188 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 16:00:58.281210 kernel: SELinux: policy capability open_perms=1 Nov 5 16:00:58.281227 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 16:00:58.281251 kernel: SELinux: policy capability always_check_network=0 Nov 5 16:00:58.281273 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 16:00:58.281308 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 16:00:58.281325 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 16:00:58.281346 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 16:00:58.281363 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 16:00:58.281381 kernel: audit: type=1403 audit(1762358456.197:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 16:00:58.281400 systemd[1]: Successfully loaded SELinux policy in 169.466ms. Nov 5 16:00:58.281432 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.808ms. 
Nov 5 16:00:58.281453 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 16:00:58.281490 systemd[1]: Detected virtualization kvm. Nov 5 16:00:58.281513 systemd[1]: Detected architecture x86-64. Nov 5 16:00:58.281531 systemd[1]: Detected first boot. Nov 5 16:00:58.281550 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 16:00:58.281569 zram_generator::config[1172]: No configuration found. Nov 5 16:00:58.281588 kernel: Guest personality initialized and is inactive Nov 5 16:00:58.281609 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 5 16:00:58.281627 kernel: Initialized host personality Nov 5 16:00:58.281648 kernel: NET: Registered PF_VSOCK protocol family Nov 5 16:00:58.281666 systemd[1]: Populated /etc with preset unit settings. Nov 5 16:00:58.281685 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 16:00:58.281704 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 16:00:58.281729 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 16:00:58.281753 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 16:00:58.281772 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 16:00:58.281799 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 16:00:58.281818 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 16:00:58.281838 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 16:00:58.281857 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Nov 5 16:00:58.281876 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 16:00:58.281895 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 16:00:58.281914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 16:00:58.281936 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 16:00:58.281955 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 16:00:58.281973 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 16:00:58.281993 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 16:00:58.282012 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 16:00:58.282045 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 16:00:58.282068 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 16:00:58.282087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 16:00:58.282105 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 16:00:58.282127 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 16:00:58.282146 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 16:00:58.282165 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 16:00:58.282184 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 16:00:58.282204 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 16:00:58.282222 systemd[1]: Reached target slices.target - Slice Units. Nov 5 16:00:58.282241 systemd[1]: Reached target swap.target - Swaps. 
Nov 5 16:00:58.282259 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 16:00:58.282277 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 16:00:58.282295 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 16:00:58.282313 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 16:00:58.282334 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 16:00:58.282352 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 16:00:58.282369 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 16:00:58.282387 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 16:00:58.282404 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 16:00:58.282422 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 16:00:58.282440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:00:58.282614 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 16:00:58.282636 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 16:00:58.282654 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 16:00:58.282672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 16:00:58.282689 systemd[1]: Reached target machines.target - Containers. Nov 5 16:00:58.282706 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 16:00:58.282724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 5 16:00:58.282745 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 16:00:58.282762 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 16:00:58.282779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 16:00:58.282797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 16:00:58.282814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 16:00:58.282831 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 16:00:58.282849 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 16:00:58.282872 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 16:00:58.282891 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 16:00:58.282910 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 16:00:58.282929 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 16:00:58.282948 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 16:00:58.282968 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 16:00:58.282990 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 16:00:58.283009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 16:00:58.283042 kernel: fuse: init (API version 7.41) Nov 5 16:00:58.283062 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 16:00:58.283081 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Nov 5 16:00:58.283103 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 16:00:58.283170 systemd-journald[1250]: Collecting audit messages is disabled. Nov 5 16:00:58.283215 kernel: ACPI: bus type drm_connector registered Nov 5 16:00:58.283234 systemd-journald[1250]: Journal started Nov 5 16:00:58.283267 systemd-journald[1250]: Runtime Journal (/run/log/journal/cc3567e341d54d69bebb1b77f220f0f7) is 5.9M, max 47.9M, 41.9M free. Nov 5 16:00:58.288708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 16:00:56.860699 systemd[1]: Queued start job for default target multi-user.target. Nov 5 16:00:56.886455 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 16:00:56.887846 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 16:00:58.293499 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:00:58.300890 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 16:00:58.307509 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 16:00:58.313810 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 16:00:58.317196 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 16:00:58.319288 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 16:00:58.323509 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 16:00:58.326107 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 16:00:58.329741 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 16:00:58.332971 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 16:00:58.335996 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Nov 5 16:00:58.336383 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 16:00:58.339709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 16:00:58.340731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 16:00:58.352897 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 16:00:58.353416 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 16:00:58.359458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 16:00:58.359810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 16:00:58.365744 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 16:00:58.366089 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 16:00:58.372988 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 16:00:58.373335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 16:00:58.375965 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 16:00:58.385285 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 16:00:58.391696 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 16:00:58.398430 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 16:00:58.430881 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 16:00:58.437808 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 16:00:58.446341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 16:00:58.471261 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Nov 5 16:00:58.476795 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 16:00:58.476860 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 16:00:58.484178 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 16:00:58.493751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:00:58.500455 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 16:00:58.517260 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 16:00:58.522480 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 16:00:58.525357 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 16:00:58.537777 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 16:00:58.540228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 16:00:58.547296 systemd-journald[1250]: Time spent on flushing to /var/log/journal/cc3567e341d54d69bebb1b77f220f0f7 is 25.530ms for 1022 entries. Nov 5 16:00:58.547296 systemd-journald[1250]: System Journal (/var/log/journal/cc3567e341d54d69bebb1b77f220f0f7) is 8M, max 163.5M, 155.5M free. Nov 5 16:00:58.585840 systemd-journald[1250]: Received client request to flush runtime journal. Nov 5 16:00:58.549747 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 16:00:58.568788 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 16:00:58.573325 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 5 16:00:58.601975 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 16:00:58.606492 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 16:00:58.624821 kernel: loop1: detected capacity change from 0 to 224512 Nov 5 16:00:58.633281 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 16:00:58.647681 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 16:00:58.657055 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 16:00:58.672662 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 16:00:58.836729 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 16:00:58.847648 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:00:58.869353 kernel: loop2: detected capacity change from 0 to 110984 Nov 5 16:00:58.910123 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 16:00:58.919168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 16:00:58.924647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 16:00:58.943632 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 16:00:58.980486 kernel: loop3: detected capacity change from 0 to 128048 Nov 5 16:00:58.999172 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Nov 5 16:00:58.999198 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Nov 5 16:00:59.020766 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 16:00:59.069744 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Nov 5 16:00:59.080249 kernel: loop4: detected capacity change from 0 to 224512
Nov 5 16:00:59.110072 kernel: loop5: detected capacity change from 0 to 110984
Nov 5 16:00:59.160493 kernel: loop6: detected capacity change from 0 to 128048
Nov 5 16:00:59.184598 (sd-merge)[1315]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 5 16:00:59.190269 (sd-merge)[1315]: Merged extensions into '/usr'.
Nov 5 16:00:59.217253 systemd-resolved[1308]: Positive Trust Anchors:
Nov 5 16:00:59.217827 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 16:00:59.217838 systemd-resolved[1308]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 16:00:59.217884 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 16:00:59.229276 systemd-resolved[1308]: Defaulting to hostname 'linux'.
Nov 5 16:00:59.232225 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 16:00:59.234881 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 16:00:59.295896 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 16:00:59.331489 systemd[1]: Reload requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 16:00:59.331515 systemd[1]: Reloading...
Nov 5 16:00:59.475505 zram_generator::config[1348]: No configuration found.
Nov 5 16:00:59.870906 systemd[1]: Reloading finished in 538 ms.
Nov 5 16:00:59.911433 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 16:00:59.916147 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 16:00:59.944250 systemd[1]: Starting ensure-sysext.service...
Nov 5 16:00:59.955989 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 16:01:00.006259 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 16:01:00.041225 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 16:01:00.041335 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 16:01:00.041914 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 16:01:00.042314 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 16:01:00.045707 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 16:01:00.047215 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Nov 5 16:01:00.047326 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Nov 5 16:01:00.049123 systemd[1]: Reload requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)...
Nov 5 16:01:00.049144 systemd[1]: Reloading...
Nov 5 16:01:00.066384 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 16:01:00.066401 systemd-tmpfiles[1387]: Skipping /boot
Nov 5 16:01:00.071273 systemd-udevd[1389]: Using default interface naming scheme 'v257'.
Nov 5 16:01:00.099633 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 16:01:00.099808 systemd-tmpfiles[1387]: Skipping /boot
Nov 5 16:01:00.239496 zram_generator::config[1418]: No configuration found.
Nov 5 16:01:00.425498 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 5 16:01:00.430059 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 16:01:00.435443 kernel: ACPI: button: Power Button [PWRF]
Nov 5 16:01:00.440552 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 5 16:01:00.440997 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 5 16:01:00.441297 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 16:01:00.741216 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 16:01:00.741697 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 16:01:00.744925 systemd[1]: Reloading finished in 695 ms.
Nov 5 16:01:00.821708 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 16:01:00.827203 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 16:01:01.015248 systemd[1]: Finished ensure-sysext.service.
Nov 5 16:01:01.027928 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 16:01:01.223200 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 16:01:01.232755 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 16:01:01.238527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 16:01:01.246745 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 16:01:01.254762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 16:01:01.368677 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 16:01:01.376687 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 16:01:01.394337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 16:01:01.396921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 16:01:01.399583 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 16:01:01.405816 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 16:01:01.410044 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 16:01:01.424057 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 16:01:01.436592 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 16:01:01.445652 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 16:01:01.450505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:01:01.450644 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 16:01:01.453436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 16:01:01.461218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 16:01:01.462112 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 16:01:01.462406 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 16:01:01.463422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 16:01:01.463712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 16:01:01.465094 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 16:01:01.465351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 16:01:01.476266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 16:01:01.476367 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 16:01:01.610618 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 16:01:01.624787 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 16:01:01.627287 kernel: kvm_amd: TSC scaling supported
Nov 5 16:01:01.627351 kernel: kvm_amd: Nested Virtualization enabled
Nov 5 16:01:01.627370 kernel: kvm_amd: Nested Paging enabled
Nov 5 16:01:01.627387 kernel: kvm_amd: LBR virtualization supported
Nov 5 16:01:01.629691 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 5 16:01:01.629775 kernel: kvm_amd: Virtual GIF supported
Nov 5 16:01:01.659609 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 16:01:01.797991 augenrules[1546]: No rules
Nov 5 16:01:01.793663 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 16:01:01.794057 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 16:01:01.819865 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 16:01:01.820139 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 16:01:01.907904 systemd-networkd[1510]: lo: Link UP
Nov 5 16:01:01.909100 systemd-networkd[1510]: lo: Gained carrier
Nov 5 16:01:01.915150 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 16:01:01.915436 systemd[1]: Reached target network.target - Network.
Nov 5 16:01:01.919682 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 16:01:01.924155 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 16:01:01.924166 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 16:01:01.928548 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 16:01:01.936326 systemd-networkd[1510]: eth0: Link UP
Nov 5 16:01:01.940159 systemd-networkd[1510]: eth0: Gained carrier
Nov 5 16:01:01.940196 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 16:01:01.975937 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 16:01:01.976989 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 16:01:02.002561 systemd-networkd[1510]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 16:01:02.003476 systemd-timesyncd[1515]: Network configuration changed, trying to establish connection.
Nov 5 16:01:03.038422 systemd-timesyncd[1515]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 5 16:01:03.038543 systemd-timesyncd[1515]: Initial clock synchronization to Wed 2025-11-05 16:01:03.038265 UTC.
Nov 5 16:01:03.038756 systemd-resolved[1308]: Clock change detected. Flushing caches.
Nov 5 16:01:03.043054 kernel: EDAC MC: Ver: 3.0.0
Nov 5 16:01:03.057435 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 16:01:03.084907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:01:04.495523 systemd-networkd[1510]: eth0: Gained IPv6LL
Nov 5 16:01:04.500574 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 5 16:01:04.506355 systemd[1]: Reached target network-online.target - Network is Online.
Nov 5 16:01:04.770312 ldconfig[1502]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 16:01:04.784011 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 16:01:04.791879 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 16:01:04.941987 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 16:01:04.951504 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 16:01:04.956319 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 16:01:04.969507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 16:01:04.972692 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 16:01:04.984571 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 16:01:04.987258 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 16:01:04.999048 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 16:01:05.002350 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 16:01:05.002413 systemd[1]: Reached target paths.target - Path Units.
Nov 5 16:01:05.009739 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 16:01:05.020654 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 16:01:05.031010 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 16:01:05.038405 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 16:01:05.053445 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 16:01:05.066614 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 16:01:05.079074 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 16:01:05.082410 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 16:01:05.094691 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 16:01:05.104131 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 16:01:05.107015 systemd[1]: Reached target basic.target - Basic System.
Nov 5 16:01:05.112217 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 16:01:05.114105 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 16:01:05.125434 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 16:01:05.133399 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 5 16:01:05.148895 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 16:01:05.156694 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 16:01:05.171540 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 16:01:05.181301 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 16:01:05.187725 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 16:01:05.194836 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 5 16:01:05.232727 oslogin_cache_refresh[1575]: Refreshing passwd entry cache
Nov 5 16:01:05.272528 extend-filesystems[1574]: Found /dev/vda6
Nov 5 16:01:05.209540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 16:01:05.274500 jq[1573]: false
Nov 5 16:01:05.252279 oslogin_cache_refresh[1575]: Failure getting users, quitting
Nov 5 16:01:05.224047 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 16:01:05.274986 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Refreshing passwd entry cache
Nov 5 16:01:05.274986 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Failure getting users, quitting
Nov 5 16:01:05.274986 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 16:01:05.274986 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Refreshing group entry cache
Nov 5 16:01:05.274986 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Failure getting groups, quitting
Nov 5 16:01:05.274986 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 16:01:05.252305 oslogin_cache_refresh[1575]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 16:01:05.233180 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 5 16:01:05.305523 extend-filesystems[1574]: Found /dev/vda9
Nov 5 16:01:05.305523 extend-filesystems[1574]: Checking size of /dev/vda9
Nov 5 16:01:05.252377 oslogin_cache_refresh[1575]: Refreshing group entry cache
Nov 5 16:01:05.242485 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 16:01:05.273048 oslogin_cache_refresh[1575]: Failure getting groups, quitting
Nov 5 16:01:05.254334 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 16:01:05.273065 oslogin_cache_refresh[1575]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 16:01:05.266392 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 16:01:05.279831 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 16:01:05.283609 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 16:01:05.284598 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 16:01:05.536044 extend-filesystems[1574]: Resized partition /dev/vda9
Nov 5 16:01:05.285552 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 16:01:05.544008 jq[1594]: true
Nov 5 16:01:05.293970 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 16:01:05.540907 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 16:01:05.550837 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 16:01:05.551170 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 16:01:05.551578 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 5 16:01:05.551887 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 5 16:01:05.557763 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 16:01:05.561572 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 16:01:05.587827 extend-filesystems[1607]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 16:01:05.576354 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 5 16:01:05.602917 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 16:01:05.603848 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 16:01:05.642870 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 5 16:01:05.660666 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 5 16:01:05.665040 update_engine[1592]: I20251105 16:01:05.664924 1592 main.cc:92] Flatcar Update Engine starting
Nov 5 16:01:05.809749 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 5 16:01:05.811119 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 5 16:01:05.823318 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 5 16:01:05.836783 jq[1620]: true
Nov 5 16:01:05.842409 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 5 16:01:05.949598 sshd_keygen[1617]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 16:01:05.949961 extend-filesystems[1607]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 5 16:01:05.949961 extend-filesystems[1607]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 5 16:01:05.949961 extend-filesystems[1607]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 5 16:01:05.983133 extend-filesystems[1574]: Resized filesystem in /dev/vda9
Nov 5 16:01:06.003023 update_engine[1592]: I20251105 16:01:05.980703 1592 update_check_scheduler.cc:74] Next update check in 4m19s
Nov 5 16:01:05.950660 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 16:01:05.962740 dbus-daemon[1571]: [system] SELinux support is enabled
Nov 5 16:01:05.951068 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 16:01:05.989329 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 16:01:06.013096 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 16:01:06.319941 systemd-logind[1589]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 5 16:01:06.320323 systemd-logind[1589]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 5 16:01:06.320918 systemd-logind[1589]: New seat seat0.
Nov 5 16:01:06.325790 bash[1662]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 16:01:06.406516 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 16:01:06.410456 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 16:01:06.416039 tar[1616]: linux-amd64/LICENSE
Nov 5 16:01:06.416761 tar[1616]: linux-amd64/helm
Nov 5 16:01:06.418411 dbus-daemon[1571]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 5 16:01:06.430464 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 16:01:06.432963 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 5 16:01:06.433142 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 16:01:06.433172 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 16:01:06.436506 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 16:01:06.436530 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 16:01:06.441901 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 16:01:06.469945 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 16:01:06.481094 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 16:01:06.481436 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 16:01:06.491113 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 16:01:06.546581 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 16:01:06.557264 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 16:01:06.564156 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 5 16:01:06.566491 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 16:01:06.810103 locksmithd[1667]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 16:01:07.731142 containerd[1621]: time="2025-11-05T16:01:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 16:01:07.733079 containerd[1621]: time="2025-11-05T16:01:07.732967707Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 16:01:07.748444 containerd[1621]: time="2025-11-05T16:01:07.748371815Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.219µs"
Nov 5 16:01:07.748624 containerd[1621]: time="2025-11-05T16:01:07.748603379Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 16:01:07.748696 containerd[1621]: time="2025-11-05T16:01:07.748681566Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 16:01:07.749044 containerd[1621]: time="2025-11-05T16:01:07.749021554Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 16:01:07.749145 containerd[1621]: time="2025-11-05T16:01:07.749121611Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 16:01:07.749278 containerd[1621]: time="2025-11-05T16:01:07.749259971Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 16:01:07.749453 containerd[1621]: time="2025-11-05T16:01:07.749430551Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 16:01:07.749516 containerd[1621]: time="2025-11-05T16:01:07.749501554Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 16:01:07.749949 containerd[1621]: time="2025-11-05T16:01:07.749925248Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 16:01:07.750100 containerd[1621]: time="2025-11-05T16:01:07.750002353Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 16:01:07.750230 containerd[1621]: time="2025-11-05T16:01:07.750208429Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 16:01:07.750330 containerd[1621]: time="2025-11-05T16:01:07.750308517Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 16:01:07.750535 containerd[1621]: time="2025-11-05T16:01:07.750515285Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 16:01:07.750942 containerd[1621]: time="2025-11-05T16:01:07.750920966Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 16:01:07.751048 containerd[1621]: time="2025-11-05T16:01:07.751027015Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 16:01:07.752647 containerd[1621]: time="2025-11-05T16:01:07.752623569Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 16:01:07.752780 containerd[1621]: time="2025-11-05T16:01:07.752747882Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 16:01:07.753569 containerd[1621]: time="2025-11-05T16:01:07.753545878Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 16:01:07.753804 containerd[1621]: time="2025-11-05T16:01:07.753760080Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 16:01:07.767249 containerd[1621]: time="2025-11-05T16:01:07.767186559Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 16:01:07.767718 containerd[1621]: time="2025-11-05T16:01:07.767634950Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 16:01:07.767883 containerd[1621]: time="2025-11-05T16:01:07.767861375Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 16:01:07.768027 containerd[1621]: time="2025-11-05T16:01:07.767937127Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 16:01:07.768027 containerd[1621]: time="2025-11-05T16:01:07.767957946Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 16:01:07.768027 containerd[1621]: time="2025-11-05T16:01:07.767977292Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 16:01:07.768375 containerd[1621]: time="2025-11-05T16:01:07.768320656Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 16:01:07.768459 containerd[1621]: time="2025-11-05T16:01:07.768353658Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 16:01:07.768562 containerd[1621]: time="2025-11-05T16:01:07.768543634Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 16:01:07.768648 containerd[1621]: time="2025-11-05T16:01:07.768630558Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 16:01:07.768789 containerd[1621]: time="2025-11-05T16:01:07.768706811Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 16:01:07.768789 containerd[1621]: time="2025-11-05T16:01:07.768730986Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 16:01:07.769206 containerd[1621]: time="2025-11-05T16:01:07.769155131Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 16:01:07.769289 containerd[1621]: time="2025-11-05T16:01:07.769271760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 16:01:07.769425 containerd[1621]: time="2025-11-05T16:01:07.769360657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 16:01:07.769425 containerd[1621]: time="2025-11-05T16:01:07.769387477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 16:01:07.769509 containerd[1621]: time="2025-11-05T16:01:07.769401413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 16:01:07.769651 containerd[1621]: time="2025-11-05T16:01:07.769580599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 16:01:07.769651 containerd[1621]: time="2025-11-05T16:01:07.769614433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 16:01:07.769651 containerd[1621]: time="2025-11-05T16:01:07.769629010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 16:01:07.769882 containerd[1621]: time="2025-11-05T16:01:07.769817684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 16:01:07.769882 containerd[1621]: time="2025-11-05T16:01:07.769839264Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 16:01:07.769882 containerd[1621]: time="2025-11-05T16:01:07.769853912Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 16:01:07.770239 containerd[1621]: time="2025-11-05T16:01:07.770174122Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 16:01:07.770239 containerd[1621]: time="2025-11-05T16:01:07.770201464Z" level=info msg="Start snapshots syncer"
Nov 5 16:01:07.770359 containerd[1621]: time="2025-11-05T16:01:07.770341927Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 16:01:07.773273 containerd[1621]: time="2025-11-05T16:01:07.773148931Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 16:01:07.776337 containerd[1621]: time="2025-11-05T16:01:07.776270736Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 16:01:07.776616 containerd[1621]: time="2025-11-05T16:01:07.776451204Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 16:01:07.776928 containerd[1621]: time="2025-11-05T16:01:07.776826317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 16:01:07.776928 containerd[1621]: time="2025-11-05T16:01:07.776876732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 16:01:07.776928 containerd[1621]: time="2025-11-05T16:01:07.776892672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 16:01:07.776928 containerd[1621]: time="2025-11-05T16:01:07.776906738Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 16:01:07.776928 containerd[1621]: time="2025-11-05T16:01:07.776922077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 16:01:07.777113 containerd[1621]: time="2025-11-05T16:01:07.776936765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 16:01:07.777113 containerd[1621]: time="2025-11-05T16:01:07.776951192Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 16:01:07.777113 containerd[1621]: time="2025-11-05T16:01:07.776990886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 16:01:07.777113 containerd[1621]: time="2025-11-05T16:01:07.777010182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 16:01:07.777113 containerd[1621]: time="2025-11-05T16:01:07.777025180Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 16:01:07.777113 containerd[1621]: time="2025-11-05T16:01:07.777068872Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 16:01:07.777113 containerd[1621]: time="2025-11-05T16:01:07.777087828Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 16:01:07.777113 containerd[1621]: time="2025-11-05T16:01:07.777113837Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 16:01:07.777351 containerd[1621]: time="2025-11-05T16:01:07.777126360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 16:01:07.777351 containerd[1621]: time="2025-11-05T16:01:07.777136349Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 16:01:07.777351 containerd[1621]: time="2025-11-05T16:01:07.777148321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 16:01:07.777351 containerd[1621]: time="2025-11-05T16:01:07.777166255Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 16:01:07.777351 containerd[1621]: time="2025-11-05T16:01:07.777190310Z" level=info msg="runtime interface created" Nov 5 16:01:07.777351 containerd[1621]: time="2025-11-05T16:01:07.777197974Z" level=info msg="created NRI interface" Nov 5 16:01:07.777351 containerd[1621]: time="2025-11-05T16:01:07.777209406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 16:01:07.777351 containerd[1621]: time="2025-11-05T16:01:07.777232008Z" level=info msg="Connect containerd service" Nov 5 16:01:07.777351 containerd[1621]: time="2025-11-05T16:01:07.777262075Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 16:01:07.779421 containerd[1621]: 
time="2025-11-05T16:01:07.779388362Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 16:01:07.990610 tar[1616]: linux-amd64/README.md Nov 5 16:01:08.040857 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 16:01:08.129574 containerd[1621]: time="2025-11-05T16:01:08.129486030Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 16:01:08.129574 containerd[1621]: time="2025-11-05T16:01:08.129574927Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 16:01:08.129808 containerd[1621]: time="2025-11-05T16:01:08.129624019Z" level=info msg="Start subscribing containerd event" Nov 5 16:01:08.129808 containerd[1621]: time="2025-11-05T16:01:08.129654977Z" level=info msg="Start recovering state" Nov 5 16:01:08.129901 containerd[1621]: time="2025-11-05T16:01:08.129874759Z" level=info msg="Start event monitor" Nov 5 16:01:08.129901 containerd[1621]: time="2025-11-05T16:01:08.129891401Z" level=info msg="Start cni network conf syncer for default" Nov 5 16:01:08.129901 containerd[1621]: time="2025-11-05T16:01:08.129902141Z" level=info msg="Start streaming server" Nov 5 16:01:08.129990 containerd[1621]: time="2025-11-05T16:01:08.129912129Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 16:01:08.129990 containerd[1621]: time="2025-11-05T16:01:08.129920485Z" level=info msg="runtime interface starting up..." Nov 5 16:01:08.129990 containerd[1621]: time="2025-11-05T16:01:08.129927669Z" level=info msg="starting plugins..." 
Nov 5 16:01:08.129990 containerd[1621]: time="2025-11-05T16:01:08.129942947Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 16:01:08.130251 containerd[1621]: time="2025-11-05T16:01:08.130166096Z" level=info msg="containerd successfully booted in 0.400227s" Nov 5 16:01:08.130448 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 16:01:10.336218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:01:10.340027 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 16:01:10.344758 systemd[1]: Startup finished in 2.650s (kernel) + 9.120s (initrd) + 13.283s (userspace) = 25.053s. Nov 5 16:01:10.433066 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:01:11.663406 kubelet[1708]: E1105 16:01:11.662280 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:01:11.669533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:01:11.669760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:01:11.670417 systemd[1]: kubelet.service: Consumed 3.555s CPU time, 266.7M memory peak. Nov 5 16:01:13.428895 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 16:01:13.437989 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:33564.service - OpenSSH per-connection server daemon (10.0.0.1:33564). 
Nov 5 16:01:14.360325 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 33564 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:01:14.362481 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:14.417065 systemd-logind[1589]: New session 1 of user core. Nov 5 16:01:14.421260 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 16:01:14.423220 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 16:01:14.485886 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 16:01:14.495226 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 16:01:14.542489 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 16:01:14.552717 systemd-logind[1589]: New session c1 of user core. Nov 5 16:01:14.895703 systemd[1726]: Queued start job for default target default.target. Nov 5 16:01:14.921017 systemd[1726]: Created slice app.slice - User Application Slice. Nov 5 16:01:14.921057 systemd[1726]: Reached target paths.target - Paths. Nov 5 16:01:14.921117 systemd[1726]: Reached target timers.target - Timers. Nov 5 16:01:14.924040 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 16:01:14.956334 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 16:01:14.962013 systemd[1726]: Reached target sockets.target - Sockets. Nov 5 16:01:14.962940 systemd[1726]: Reached target basic.target - Basic System. Nov 5 16:01:14.965941 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 16:01:14.966960 systemd[1726]: Reached target default.target - Main User Target. Nov 5 16:01:14.967807 systemd[1726]: Startup finished in 394ms. Nov 5 16:01:14.983951 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 5 16:01:15.080962 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:33572.service - OpenSSH per-connection server daemon (10.0.0.1:33572). Nov 5 16:01:15.395573 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 33572 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:01:15.401753 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:15.422860 systemd-logind[1589]: New session 2 of user core. Nov 5 16:01:15.433087 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 16:01:15.512612 sshd[1740]: Connection closed by 10.0.0.1 port 33572 Nov 5 16:01:15.514155 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:15.536754 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:33572.service: Deactivated successfully. Nov 5 16:01:15.545763 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 16:01:15.551077 systemd-logind[1589]: Session 2 logged out. Waiting for processes to exit. Nov 5 16:01:15.561644 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:33576.service - OpenSSH per-connection server daemon (10.0.0.1:33576). Nov 5 16:01:15.566364 systemd-logind[1589]: Removed session 2. Nov 5 16:01:15.651429 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 33576 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:01:15.653362 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:15.679527 systemd-logind[1589]: New session 3 of user core. Nov 5 16:01:15.697156 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 16:01:15.766922 sshd[1749]: Connection closed by 10.0.0.1 port 33576 Nov 5 16:01:15.770724 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:15.795438 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:33576.service: Deactivated successfully. Nov 5 16:01:15.803189 systemd[1]: session-3.scope: Deactivated successfully. 
Nov 5 16:01:15.814376 systemd-logind[1589]: Session 3 logged out. Waiting for processes to exit. Nov 5 16:01:15.825701 systemd-logind[1589]: Removed session 3. Nov 5 16:01:15.838156 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:33580.service - OpenSSH per-connection server daemon (10.0.0.1:33580). Nov 5 16:01:15.972724 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 33580 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:01:15.973609 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:15.991546 systemd-logind[1589]: New session 4 of user core. Nov 5 16:01:16.004072 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 16:01:16.095490 sshd[1758]: Connection closed by 10.0.0.1 port 33580 Nov 5 16:01:16.094454 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:16.108263 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:33580.service: Deactivated successfully. Nov 5 16:01:16.110275 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 16:01:16.116166 systemd-logind[1589]: Session 4 logged out. Waiting for processes to exit. Nov 5 16:01:16.117747 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:33590.service - OpenSSH per-connection server daemon (10.0.0.1:33590). Nov 5 16:01:16.120645 systemd-logind[1589]: Removed session 4. Nov 5 16:01:16.203975 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 33590 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:01:16.205879 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:16.235869 systemd-logind[1589]: New session 5 of user core. Nov 5 16:01:16.254184 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 5 16:01:16.347043 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 16:01:16.348213 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:01:16.381163 sudo[1768]: pam_unix(sudo:session): session closed for user root Nov 5 16:01:16.386990 sshd[1767]: Connection closed by 10.0.0.1 port 33590 Nov 5 16:01:16.390236 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:16.411367 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:33606.service - OpenSSH per-connection server daemon (10.0.0.1:33606). Nov 5 16:01:16.412479 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:33590.service: Deactivated successfully. Nov 5 16:01:16.420754 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 16:01:16.428075 systemd-logind[1589]: Session 5 logged out. Waiting for processes to exit. Nov 5 16:01:16.431132 systemd-logind[1589]: Removed session 5. Nov 5 16:01:16.487446 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 33606 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:01:16.489820 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:16.499648 systemd-logind[1589]: New session 6 of user core. Nov 5 16:01:16.509133 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 5 16:01:16.585490 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 16:01:16.585992 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:01:16.993399 sudo[1779]: pam_unix(sudo:session): session closed for user root Nov 5 16:01:17.026785 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 16:01:17.027221 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:01:17.058901 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 16:01:17.164911 augenrules[1801]: No rules Nov 5 16:01:17.172731 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 16:01:17.175882 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 16:01:17.182868 sudo[1778]: pam_unix(sudo:session): session closed for user root Nov 5 16:01:17.185881 sshd[1777]: Connection closed by 10.0.0.1 port 33606 Nov 5 16:01:17.187975 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:17.210352 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:33606.service: Deactivated successfully. Nov 5 16:01:17.212689 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 16:01:17.219861 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:33620.service - OpenSSH per-connection server daemon (10.0.0.1:33620). Nov 5 16:01:17.225148 systemd-logind[1589]: Session 6 logged out. Waiting for processes to exit. Nov 5 16:01:17.229334 systemd-logind[1589]: Removed session 6. Nov 5 16:01:17.357790 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 33620 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:01:17.359092 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:17.377604 systemd-logind[1589]: New session 7 of user core. 
Nov 5 16:01:17.387058 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 16:01:17.470677 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 16:01:17.471128 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:01:21.415397 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 16:01:21.449465 (dockerd)[1836]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 16:01:21.744794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 16:01:21.747922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:01:23.007853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:01:23.032420 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:01:24.042483 dockerd[1836]: time="2025-11-05T16:01:24.042382371Z" level=info msg="Starting up" Nov 5 16:01:24.043840 dockerd[1836]: time="2025-11-05T16:01:24.043810319Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 16:01:24.074500 kubelet[1851]: E1105 16:01:24.074411 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:01:24.083051 dockerd[1836]: time="2025-11-05T16:01:24.081748136Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 16:01:24.086027 systemd[1]: kubelet.service: Main process exited, code=exited, 
status=1/FAILURE Nov 5 16:01:24.086269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:01:24.086707 systemd[1]: kubelet.service: Consumed 1.504s CPU time, 112M memory peak. Nov 5 16:01:24.706184 dockerd[1836]: time="2025-11-05T16:01:24.704023003Z" level=info msg="Loading containers: start." Nov 5 16:01:24.747822 kernel: Initializing XFRM netlink socket Nov 5 16:01:25.353164 systemd-networkd[1510]: docker0: Link UP Nov 5 16:01:25.365930 dockerd[1836]: time="2025-11-05T16:01:25.365851308Z" level=info msg="Loading containers: done." Nov 5 16:01:25.414908 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1363018960-merged.mount: Deactivated successfully. Nov 5 16:01:25.418028 dockerd[1836]: time="2025-11-05T16:01:25.417962124Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 16:01:25.418128 dockerd[1836]: time="2025-11-05T16:01:25.418093380Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 16:01:25.418285 dockerd[1836]: time="2025-11-05T16:01:25.418250605Z" level=info msg="Initializing buildkit" Nov 5 16:01:25.561170 dockerd[1836]: time="2025-11-05T16:01:25.560726288Z" level=info msg="Completed buildkit initialization" Nov 5 16:01:25.577650 dockerd[1836]: time="2025-11-05T16:01:25.576518504Z" level=info msg="Daemon has completed initialization" Nov 5 16:01:25.580558 systemd[1]: Started docker.service - Docker Application Container Engine. 
Nov 5 16:01:25.583367 dockerd[1836]: time="2025-11-05T16:01:25.581826979Z" level=info msg="API listen on /run/docker.sock" Nov 5 16:01:28.472152 containerd[1621]: time="2025-11-05T16:01:28.470945458Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 5 16:01:29.485727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709127132.mount: Deactivated successfully. Nov 5 16:01:34.248492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 16:01:34.251708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:01:34.929674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:01:34.959387 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:01:35.307444 containerd[1621]: time="2025-11-05T16:01:35.307342683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:35.317410 containerd[1621]: time="2025-11-05T16:01:35.314061413Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 5 16:01:35.317410 containerd[1621]: time="2025-11-05T16:01:35.314165398Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:35.329006 containerd[1621]: time="2025-11-05T16:01:35.327406269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:35.329006 containerd[1621]: time="2025-11-05T16:01:35.328511492Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id 
\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 6.857498748s" Nov 5 16:01:35.329006 containerd[1621]: time="2025-11-05T16:01:35.328552459Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 5 16:01:35.334610 containerd[1621]: time="2025-11-05T16:01:35.334559995Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 5 16:01:35.381394 kubelet[2141]: E1105 16:01:35.380146 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:01:35.391139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:01:35.391491 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:01:35.481008 systemd[1]: kubelet.service: Consumed 769ms CPU time, 109.2M memory peak. 
Nov 5 16:01:41.903644 containerd[1621]: time="2025-11-05T16:01:41.901904862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:41.911879 containerd[1621]: time="2025-11-05T16:01:41.911052560Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 5 16:01:41.913965 containerd[1621]: time="2025-11-05T16:01:41.913861560Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:41.925628 containerd[1621]: time="2025-11-05T16:01:41.925534451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:41.934967 containerd[1621]: time="2025-11-05T16:01:41.933321604Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 6.597721631s" Nov 5 16:01:41.934967 containerd[1621]: time="2025-11-05T16:01:41.934039455Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 5 16:01:41.944614 containerd[1621]: time="2025-11-05T16:01:41.944278898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 5 16:01:45.579129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Nov 5 16:01:45.590725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:01:46.261091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:01:46.276611 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:01:46.712801 kubelet[2166]: E1105 16:01:46.709588 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:01:46.721258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:01:46.721509 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:01:46.722063 systemd[1]: kubelet.service: Consumed 709ms CPU time, 109.5M memory peak. 
Nov 5 16:01:47.596463 containerd[1621]: time="2025-11-05T16:01:47.595564640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:47.601147 containerd[1621]: time="2025-11-05T16:01:47.601007862Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 5 16:01:47.607330 containerd[1621]: time="2025-11-05T16:01:47.602740758Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:47.616874 containerd[1621]: time="2025-11-05T16:01:47.614171936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:47.622267 containerd[1621]: time="2025-11-05T16:01:47.621272032Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 5.676874905s" Nov 5 16:01:47.622267 containerd[1621]: time="2025-11-05T16:01:47.621805224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 5 16:01:47.629163 containerd[1621]: time="2025-11-05T16:01:47.628799504Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 5 16:01:49.338466 kernel: hrtimer: interrupt took 2155095 ns Nov 5 16:01:50.696093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757891592.mount: Deactivated successfully. 
Nov 5 16:01:51.299241 update_engine[1592]: I20251105 16:01:51.298156 1592 update_attempter.cc:509] Updating boot flags... Nov 5 16:01:54.377220 containerd[1621]: time="2025-11-05T16:01:54.375981915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:54.387947 containerd[1621]: time="2025-11-05T16:01:54.387300190Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 5 16:01:54.407578 containerd[1621]: time="2025-11-05T16:01:54.404797017Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:54.428703 containerd[1621]: time="2025-11-05T16:01:54.425348867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:01:54.428703 containerd[1621]: time="2025-11-05T16:01:54.427551708Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 6.798695088s" Nov 5 16:01:54.431691 containerd[1621]: time="2025-11-05T16:01:54.431355358Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 5 16:01:54.434699 containerd[1621]: time="2025-11-05T16:01:54.433536078Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 5 16:01:56.746506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Nov 5 16:01:56.751867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:01:57.315509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:01:57.360663 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:01:57.456470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount133526125.mount: Deactivated successfully. Nov 5 16:01:57.513104 kubelet[2210]: E1105 16:01:57.512693 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:01:57.519741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:01:57.520023 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:01:57.521123 systemd[1]: kubelet.service: Consumed 502ms CPU time, 110.9M memory peak. 
Nov 5 16:02:01.166258 containerd[1621]: time="2025-11-05T16:02:01.164806525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:01.169633 containerd[1621]: time="2025-11-05T16:02:01.169547509Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 5 16:02:01.176250 containerd[1621]: time="2025-11-05T16:02:01.173919183Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:01.181059 containerd[1621]: time="2025-11-05T16:02:01.180939950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:01.184844 containerd[1621]: time="2025-11-05T16:02:01.184309901Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 6.745592171s" Nov 5 16:02:01.184844 containerd[1621]: time="2025-11-05T16:02:01.184361487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 5 16:02:01.190108 containerd[1621]: time="2025-11-05T16:02:01.189341047Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 16:02:02.126612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528763218.mount: Deactivated successfully. 
Nov 5 16:02:02.149736 containerd[1621]: time="2025-11-05T16:02:02.148742228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:02:02.151292 containerd[1621]: time="2025-11-05T16:02:02.150000451Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 16:02:02.155245 containerd[1621]: time="2025-11-05T16:02:02.151588621Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:02:02.157381 containerd[1621]: time="2025-11-05T16:02:02.157306243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:02:02.161979 containerd[1621]: time="2025-11-05T16:02:02.158836976Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 969.448099ms" Nov 5 16:02:02.167228 containerd[1621]: time="2025-11-05T16:02:02.162238697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 16:02:02.170192 containerd[1621]: time="2025-11-05T16:02:02.168311964Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 5 16:02:03.648839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124272526.mount: Deactivated 
successfully. Nov 5 16:02:07.745420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 5 16:02:07.754496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:08.707128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:08.749719 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:02:08.885855 kubelet[2335]: E1105 16:02:08.884734 2335 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:02:08.889790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:02:08.890093 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:02:08.891228 systemd[1]: kubelet.service: Consumed 763ms CPU time, 110.2M memory peak. 
Nov 5 16:02:12.153972 containerd[1621]: time="2025-11-05T16:02:12.152427553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:12.153972 containerd[1621]: time="2025-11-05T16:02:12.153847633Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 5 16:02:12.156957 containerd[1621]: time="2025-11-05T16:02:12.156397147Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:12.165936 containerd[1621]: time="2025-11-05T16:02:12.163926586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:12.168646 containerd[1621]: time="2025-11-05T16:02:12.168549062Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 9.998793417s" Nov 5 16:02:12.168646 containerd[1621]: time="2025-11-05T16:02:12.168614906Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 5 16:02:15.424206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:15.424451 systemd[1]: kubelet.service: Consumed 763ms CPU time, 110.2M memory peak. Nov 5 16:02:15.435170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:15.532664 systemd[1]: Reload requested from client PID 2377 ('systemctl') (unit session-7.scope)... 
Nov 5 16:02:15.533273 systemd[1]: Reloading... Nov 5 16:02:15.748539 zram_generator::config[2422]: No configuration found. Nov 5 16:02:16.619730 systemd[1]: Reloading finished in 1085 ms. Nov 5 16:02:16.766891 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 16:02:16.767044 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 16:02:16.769407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:16.769482 systemd[1]: kubelet.service: Consumed 455ms CPU time, 98.2M memory peak. Nov 5 16:02:16.776295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:17.125020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:17.152338 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 16:02:17.266924 kubelet[2468]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 16:02:17.266924 kubelet[2468]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 16:02:17.266924 kubelet[2468]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 16:02:17.267496 kubelet[2468]: I1105 16:02:17.266969 2468 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 16:02:17.707124 kubelet[2468]: I1105 16:02:17.705494 2468 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 16:02:17.707124 kubelet[2468]: I1105 16:02:17.706190 2468 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 16:02:17.707560 kubelet[2468]: I1105 16:02:17.707485 2468 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 16:02:17.827087 kubelet[2468]: E1105 16:02:17.826404 2468 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Nov 5 16:02:17.827087 kubelet[2468]: I1105 16:02:17.827134 2468 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 16:02:17.852949 kubelet[2468]: I1105 16:02:17.852876 2468 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 16:02:17.867919 kubelet[2468]: I1105 16:02:17.867835 2468 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 16:02:17.868335 kubelet[2468]: I1105 16:02:17.868211 2468 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 16:02:17.868609 kubelet[2468]: I1105 16:02:17.868269 2468 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 16:02:17.868609 kubelet[2468]: I1105 16:02:17.868602 2468 topology_manager.go:138] "Creating topology manager with none policy" Nov 
5 16:02:17.868970 kubelet[2468]: I1105 16:02:17.868616 2468 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 16:02:17.868970 kubelet[2468]: I1105 16:02:17.868845 2468 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:17.874616 kubelet[2468]: I1105 16:02:17.874366 2468 kubelet.go:446] "Attempting to sync node with API server" Nov 5 16:02:17.877933 kubelet[2468]: I1105 16:02:17.875345 2468 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 16:02:17.877933 kubelet[2468]: I1105 16:02:17.875409 2468 kubelet.go:352] "Adding apiserver pod source" Nov 5 16:02:17.877933 kubelet[2468]: I1105 16:02:17.875433 2468 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 16:02:17.896399 kubelet[2468]: W1105 16:02:17.895874 2468 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Nov 5 16:02:17.896399 kubelet[2468]: E1105 16:02:17.895986 2468 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Nov 5 16:02:17.896399 kubelet[2468]: W1105 16:02:17.896130 2468 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Nov 5 16:02:17.896399 kubelet[2468]: E1105 16:02:17.896213 2468 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Nov 5 16:02:17.904488 kubelet[2468]: I1105 16:02:17.903857 2468 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 16:02:18.001891 kubelet[2468]: I1105 16:02:18.001343 2468 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 16:02:18.005557 kubelet[2468]: W1105 16:02:18.003222 2468 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 16:02:18.015719 kubelet[2468]: I1105 16:02:18.015623 2468 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 16:02:18.016254 kubelet[2468]: I1105 16:02:18.016214 2468 server.go:1287] "Started kubelet" Nov 5 16:02:18.021103 kubelet[2468]: I1105 16:02:18.021032 2468 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 16:02:18.022178 kubelet[2468]: I1105 16:02:18.021976 2468 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 16:02:18.022697 kubelet[2468]: I1105 16:02:18.022647 2468 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 16:02:18.022922 kubelet[2468]: I1105 16:02:18.022859 2468 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 16:02:18.026040 kubelet[2468]: I1105 16:02:18.025962 2468 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 16:02:18.026901 kubelet[2468]: I1105 16:02:18.026630 2468 server.go:479] "Adding debug handlers to kubelet server" Nov 5 16:02:18.030775 kubelet[2468]: I1105 16:02:18.030572 2468 volume_manager.go:297] "Starting Kubelet Volume 
Manager" Nov 5 16:02:18.034396 kubelet[2468]: E1105 16:02:18.032256 2468 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 16:02:18.034396 kubelet[2468]: I1105 16:02:18.032649 2468 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 16:02:18.034396 kubelet[2468]: I1105 16:02:18.032722 2468 reconciler.go:26] "Reconciler: start to sync state" Nov 5 16:02:18.034396 kubelet[2468]: W1105 16:02:18.033111 2468 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Nov 5 16:02:18.034396 kubelet[2468]: E1105 16:02:18.033170 2468 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Nov 5 16:02:18.034396 kubelet[2468]: E1105 16:02:18.033298 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Nov 5 16:02:18.034396 kubelet[2468]: I1105 16:02:18.033427 2468 factory.go:221] Registration of the systemd container factory successfully Nov 5 16:02:18.034396 kubelet[2468]: I1105 16:02:18.033511 2468 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 16:02:18.034396 kubelet[2468]: E1105 16:02:18.034170 2468 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 16:02:18.034755 kubelet[2468]: I1105 16:02:18.034585 2468 factory.go:221] Registration of the containerd container factory successfully Nov 5 16:02:18.039750 kubelet[2468]: E1105 16:02:18.036777 2468 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187527c0c14cef32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 16:02:18.01566597 +0000 UTC m=+0.845400330,LastTimestamp:2025-11-05 16:02:18.01566597 +0000 UTC m=+0.845400330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 16:02:18.058652 kubelet[2468]: I1105 16:02:18.058596 2468 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 16:02:18.058652 kubelet[2468]: I1105 16:02:18.058625 2468 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 16:02:18.058652 kubelet[2468]: I1105 16:02:18.058650 2468 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:18.065201 kubelet[2468]: I1105 16:02:18.065148 2468 policy_none.go:49] "None policy: Start" Nov 5 16:02:18.065201 kubelet[2468]: I1105 16:02:18.065189 2468 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 16:02:18.065201 kubelet[2468]: I1105 16:02:18.065208 2468 state_mem.go:35] "Initializing new in-memory state store" Nov 5 16:02:18.082218 kubelet[2468]: I1105 16:02:18.082002 2468 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 5 16:02:18.089302 kubelet[2468]: I1105 16:02:18.088570 2468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 16:02:18.089302 kubelet[2468]: I1105 16:02:18.088744 2468 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 16:02:18.089798 kubelet[2468]: I1105 16:02:18.089598 2468 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 16:02:18.089798 kubelet[2468]: I1105 16:02:18.089619 2468 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 16:02:18.089798 kubelet[2468]: E1105 16:02:18.089698 2468 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 16:02:18.091710 kubelet[2468]: W1105 16:02:18.090456 2468 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Nov 5 16:02:18.091710 kubelet[2468]: E1105 16:02:18.090495 2468 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Nov 5 16:02:18.098809 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 16:02:18.133324 kubelet[2468]: E1105 16:02:18.133234 2468 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 16:02:18.139101 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 5 16:02:18.144680 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 16:02:18.167884 kubelet[2468]: I1105 16:02:18.167848 2468 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 16:02:18.168650 kubelet[2468]: I1105 16:02:18.168129 2468 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 16:02:18.168650 kubelet[2468]: I1105 16:02:18.168145 2468 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 16:02:18.168650 kubelet[2468]: I1105 16:02:18.168491 2468 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 16:02:18.169599 kubelet[2468]: E1105 16:02:18.169437 2468 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 16:02:18.169599 kubelet[2468]: E1105 16:02:18.169477 2468 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 16:02:18.211642 systemd[1]: Created slice kubepods-burstable-pod821b3f6fa792237d8f19702b3c98bee5.slice - libcontainer container kubepods-burstable-pod821b3f6fa792237d8f19702b3c98bee5.slice. 
Nov 5 16:02:18.233486 kubelet[2468]: I1105 16:02:18.233380 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/821b3f6fa792237d8f19702b3c98bee5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"821b3f6fa792237d8f19702b3c98bee5\") " pod="kube-system/kube-apiserver-localhost" Nov 5 16:02:18.233486 kubelet[2468]: I1105 16:02:18.233456 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:02:18.233486 kubelet[2468]: I1105 16:02:18.233492 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:02:18.233486 kubelet[2468]: I1105 16:02:18.233515 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 5 16:02:18.233881 kubelet[2468]: I1105 16:02:18.233536 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/821b3f6fa792237d8f19702b3c98bee5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"821b3f6fa792237d8f19702b3c98bee5\") " pod="kube-system/kube-apiserver-localhost" Nov 5 16:02:18.233881 
kubelet[2468]: I1105 16:02:18.233558 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:02:18.233881 kubelet[2468]: I1105 16:02:18.233579 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:02:18.233881 kubelet[2468]: I1105 16:02:18.233605 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 16:02:18.233881 kubelet[2468]: I1105 16:02:18.233626 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/821b3f6fa792237d8f19702b3c98bee5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"821b3f6fa792237d8f19702b3c98bee5\") " pod="kube-system/kube-apiserver-localhost" Nov 5 16:02:18.234498 kubelet[2468]: E1105 16:02:18.234446 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Nov 5 16:02:18.237385 kubelet[2468]: E1105 16:02:18.237332 2468 kubelet.go:3190] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:18.244719 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 5 16:02:18.260984 kubelet[2468]: E1105 16:02:18.259553 2468 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:18.264925 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Nov 5 16:02:18.268717 kubelet[2468]: E1105 16:02:18.268652 2468 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:18.274440 kubelet[2468]: I1105 16:02:18.270794 2468 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:02:18.275159 kubelet[2468]: E1105 16:02:18.275122 2468 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Nov 5 16:02:18.487150 kubelet[2468]: I1105 16:02:18.486669 2468 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:02:18.487933 kubelet[2468]: E1105 16:02:18.487735 2468 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Nov 5 16:02:18.538875 kubelet[2468]: E1105 16:02:18.538243 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:18.539187 
containerd[1621]: time="2025-11-05T16:02:18.539115489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:821b3f6fa792237d8f19702b3c98bee5,Namespace:kube-system,Attempt:0,}" Nov 5 16:02:18.561808 kubelet[2468]: E1105 16:02:18.561578 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:18.565903 containerd[1621]: time="2025-11-05T16:02:18.565690596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 5 16:02:18.571620 kubelet[2468]: E1105 16:02:18.571453 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:18.572979 containerd[1621]: time="2025-11-05T16:02:18.572701000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 5 16:02:18.649152 kubelet[2468]: E1105 16:02:18.649091 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Nov 5 16:02:18.753918 containerd[1621]: time="2025-11-05T16:02:18.750323630Z" level=info msg="connecting to shim 1f6c5401bb9f889acb8f4da741a1f3780f29741b3f4c40c75b445d12ac9a221d" address="unix:///run/containerd/s/3f2949ab84fd55711d0b5befd17526082efe13e581d1c7bb1c98854050ba61e5" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:18.857334 systemd[1]: Started cri-containerd-1f6c5401bb9f889acb8f4da741a1f3780f29741b3f4c40c75b445d12ac9a221d.scope - libcontainer container 
1f6c5401bb9f889acb8f4da741a1f3780f29741b3f4c40c75b445d12ac9a221d. Nov 5 16:02:18.867477 containerd[1621]: time="2025-11-05T16:02:18.863806247Z" level=info msg="connecting to shim a3464b9b7a8cc2d7cb9232691c909042b9874eaf5dce73e85b8480f2b6078563" address="unix:///run/containerd/s/1ba0c93994219f42f0a5c5227f71b0336d865ebd90ff787abef2bc0ca484387c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:18.867477 containerd[1621]: time="2025-11-05T16:02:18.864597530Z" level=info msg="connecting to shim 14372e584e261f78c144f869954d32fbcb5a05a1abf063d1bca82ef0309b42e6" address="unix:///run/containerd/s/5618c062663aa3dc7ded6ec7e380864aba2f7e7d8af9f827014cf18f91ea8535" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:18.892736 kubelet[2468]: I1105 16:02:18.892150 2468 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:02:18.894938 kubelet[2468]: W1105 16:02:18.893433 2468 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Nov 5 16:02:18.897030 kubelet[2468]: E1105 16:02:18.895165 2468 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Nov 5 16:02:18.897030 kubelet[2468]: E1105 16:02:18.896353 2468 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Nov 5 16:02:19.064753 kubelet[2468]: W1105 16:02:19.064618 2468 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Nov 5 16:02:19.064925 kubelet[2468]: E1105 16:02:19.064779 2468 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Nov 5 16:02:19.091043 systemd[1]: Started cri-containerd-14372e584e261f78c144f869954d32fbcb5a05a1abf063d1bca82ef0309b42e6.scope - libcontainer container 14372e584e261f78c144f869954d32fbcb5a05a1abf063d1bca82ef0309b42e6. Nov 5 16:02:19.099611 systemd[1]: Started cri-containerd-a3464b9b7a8cc2d7cb9232691c909042b9874eaf5dce73e85b8480f2b6078563.scope - libcontainer container a3464b9b7a8cc2d7cb9232691c909042b9874eaf5dce73e85b8480f2b6078563. Nov 5 16:02:19.237075 containerd[1621]: time="2025-11-05T16:02:19.236602357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3464b9b7a8cc2d7cb9232691c909042b9874eaf5dce73e85b8480f2b6078563\"" Nov 5 16:02:19.238184 kubelet[2468]: E1105 16:02:19.237936 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:19.242385 containerd[1621]: time="2025-11-05T16:02:19.242331642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:821b3f6fa792237d8f19702b3c98bee5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f6c5401bb9f889acb8f4da741a1f3780f29741b3f4c40c75b445d12ac9a221d\"" Nov 5 16:02:19.244790 kubelet[2468]: W1105 16:02:19.244708 2468 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Nov 5 16:02:19.244921 kubelet[2468]: E1105 16:02:19.244810 2468 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Nov 5 16:02:19.244921 kubelet[2468]: E1105 16:02:19.244904 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:19.249237 containerd[1621]: time="2025-11-05T16:02:19.249174893Z" level=info msg="CreateContainer within sandbox \"a3464b9b7a8cc2d7cb9232691c909042b9874eaf5dce73e85b8480f2b6078563\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 16:02:19.251699 containerd[1621]: time="2025-11-05T16:02:19.249174943Z" level=info msg="CreateContainer within sandbox \"1f6c5401bb9f889acb8f4da741a1f3780f29741b3f4c40c75b445d12ac9a221d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 16:02:19.251843 kubelet[2468]: W1105 16:02:19.249925 2468 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Nov 5 16:02:19.251843 kubelet[2468]: E1105 16:02:19.249994 2468 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" 
logger="UnhandledError" Nov 5 16:02:19.410407 containerd[1621]: time="2025-11-05T16:02:19.410248419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"14372e584e261f78c144f869954d32fbcb5a05a1abf063d1bca82ef0309b42e6\"" Nov 5 16:02:19.416508 kubelet[2468]: E1105 16:02:19.411220 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:19.419180 containerd[1621]: time="2025-11-05T16:02:19.415852008Z" level=info msg="CreateContainer within sandbox \"14372e584e261f78c144f869954d32fbcb5a05a1abf063d1bca82ef0309b42e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 16:02:19.423603 containerd[1621]: time="2025-11-05T16:02:19.423522670Z" level=info msg="Container a441c69a0ca74bed3f48b6803b1a223ca295ed37518e7e6c8d558203bc885ef6: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:19.426101 containerd[1621]: time="2025-11-05T16:02:19.425116186Z" level=info msg="Container 6594591b2730b8fa9e3a7406758f31abb702d0e2b25aac742d0b2ae8db509cfe: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:19.451929 kubelet[2468]: E1105 16:02:19.451878 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Nov 5 16:02:19.453733 containerd[1621]: time="2025-11-05T16:02:19.453361015Z" level=info msg="CreateContainer within sandbox \"a3464b9b7a8cc2d7cb9232691c909042b9874eaf5dce73e85b8480f2b6078563\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6594591b2730b8fa9e3a7406758f31abb702d0e2b25aac742d0b2ae8db509cfe\"" Nov 5 16:02:19.457651 containerd[1621]: 
time="2025-11-05T16:02:19.454455576Z" level=info msg="StartContainer for \"6594591b2730b8fa9e3a7406758f31abb702d0e2b25aac742d0b2ae8db509cfe\"" Nov 5 16:02:19.457651 containerd[1621]: time="2025-11-05T16:02:19.455967408Z" level=info msg="Container 64267d8eee941ec8df9abf3164cd75ba72000d2ee011cf03472e6d9c373f7525: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:19.457651 containerd[1621]: time="2025-11-05T16:02:19.456802623Z" level=info msg="connecting to shim 6594591b2730b8fa9e3a7406758f31abb702d0e2b25aac742d0b2ae8db509cfe" address="unix:///run/containerd/s/1ba0c93994219f42f0a5c5227f71b0336d865ebd90ff787abef2bc0ca484387c" protocol=ttrpc version=3 Nov 5 16:02:19.468304 containerd[1621]: time="2025-11-05T16:02:19.468238949Z" level=info msg="CreateContainer within sandbox \"1f6c5401bb9f889acb8f4da741a1f3780f29741b3f4c40c75b445d12ac9a221d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a441c69a0ca74bed3f48b6803b1a223ca295ed37518e7e6c8d558203bc885ef6\"" Nov 5 16:02:19.468834 containerd[1621]: time="2025-11-05T16:02:19.468793769Z" level=info msg="StartContainer for \"a441c69a0ca74bed3f48b6803b1a223ca295ed37518e7e6c8d558203bc885ef6\"" Nov 5 16:02:19.470213 containerd[1621]: time="2025-11-05T16:02:19.469858774Z" level=info msg="connecting to shim a441c69a0ca74bed3f48b6803b1a223ca295ed37518e7e6c8d558203bc885ef6" address="unix:///run/containerd/s/3f2949ab84fd55711d0b5befd17526082efe13e581d1c7bb1c98854050ba61e5" protocol=ttrpc version=3 Nov 5 16:02:19.490729 containerd[1621]: time="2025-11-05T16:02:19.490522540Z" level=info msg="CreateContainer within sandbox \"14372e584e261f78c144f869954d32fbcb5a05a1abf063d1bca82ef0309b42e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"64267d8eee941ec8df9abf3164cd75ba72000d2ee011cf03472e6d9c373f7525\"" Nov 5 16:02:19.491254 containerd[1621]: time="2025-11-05T16:02:19.491220077Z" level=info msg="StartContainer for 
\"64267d8eee941ec8df9abf3164cd75ba72000d2ee011cf03472e6d9c373f7525\"" Nov 5 16:02:19.492574 containerd[1621]: time="2025-11-05T16:02:19.492530682Z" level=info msg="connecting to shim 64267d8eee941ec8df9abf3164cd75ba72000d2ee011cf03472e6d9c373f7525" address="unix:///run/containerd/s/5618c062663aa3dc7ded6ec7e380864aba2f7e7d8af9f827014cf18f91ea8535" protocol=ttrpc version=3 Nov 5 16:02:19.519202 systemd[1]: Started cri-containerd-6594591b2730b8fa9e3a7406758f31abb702d0e2b25aac742d0b2ae8db509cfe.scope - libcontainer container 6594591b2730b8fa9e3a7406758f31abb702d0e2b25aac742d0b2ae8db509cfe. Nov 5 16:02:19.526008 systemd[1]: Started cri-containerd-a441c69a0ca74bed3f48b6803b1a223ca295ed37518e7e6c8d558203bc885ef6.scope - libcontainer container a441c69a0ca74bed3f48b6803b1a223ca295ed37518e7e6c8d558203bc885ef6. Nov 5 16:02:19.554412 systemd[1]: Started cri-containerd-64267d8eee941ec8df9abf3164cd75ba72000d2ee011cf03472e6d9c373f7525.scope - libcontainer container 64267d8eee941ec8df9abf3164cd75ba72000d2ee011cf03472e6d9c373f7525. 
Nov 5 16:02:19.700846 kubelet[2468]: I1105 16:02:19.700811 2468 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:02:19.701535 kubelet[2468]: E1105 16:02:19.701469 2468 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Nov 5 16:02:19.747135 containerd[1621]: time="2025-11-05T16:02:19.747020612Z" level=info msg="StartContainer for \"6594591b2730b8fa9e3a7406758f31abb702d0e2b25aac742d0b2ae8db509cfe\" returns successfully" Nov 5 16:02:19.765245 containerd[1621]: time="2025-11-05T16:02:19.765193371Z" level=info msg="StartContainer for \"64267d8eee941ec8df9abf3164cd75ba72000d2ee011cf03472e6d9c373f7525\" returns successfully" Nov 5 16:02:19.770115 containerd[1621]: time="2025-11-05T16:02:19.770061241Z" level=info msg="StartContainer for \"a441c69a0ca74bed3f48b6803b1a223ca295ed37518e7e6c8d558203bc885ef6\" returns successfully" Nov 5 16:02:20.109095 kubelet[2468]: E1105 16:02:20.108285 2468 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:20.109494 kubelet[2468]: E1105 16:02:20.109410 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:20.112081 kubelet[2468]: E1105 16:02:20.111841 2468 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:20.112293 kubelet[2468]: E1105 16:02:20.112272 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:20.114334 kubelet[2468]: E1105 16:02:20.113928 2468 kubelet.go:3190] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:20.114334 kubelet[2468]: E1105 16:02:20.114030 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:21.118497 kubelet[2468]: E1105 16:02:21.118429 2468 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:21.121837 kubelet[2468]: E1105 16:02:21.120659 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:21.121837 kubelet[2468]: E1105 16:02:21.121597 2468 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:21.121837 kubelet[2468]: E1105 16:02:21.121721 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:21.234233 kubelet[2468]: E1105 16:02:21.234044 2468 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:21.234705 kubelet[2468]: E1105 16:02:21.234657 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:21.308158 kubelet[2468]: I1105 16:02:21.307953 2468 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 16:02:22.119728 kubelet[2468]: E1105 16:02:22.119614 2468 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:22.121492 kubelet[2468]: E1105 16:02:22.120193 2468 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 16:02:22.121492 kubelet[2468]: E1105 16:02:22.120362 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:22.121492 kubelet[2468]: E1105 16:02:22.120537 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:22.985875 kubelet[2468]: E1105 16:02:22.985803 2468 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 16:02:23.094153 kubelet[2468]: I1105 16:02:23.093467 2468 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 16:02:23.133920 kubelet[2468]: I1105 16:02:23.133874 2468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 16:02:23.191135 kubelet[2468]: E1105 16:02:23.191071 2468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 16:02:23.191135 kubelet[2468]: I1105 16:02:23.191113 2468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 16:02:23.195879 kubelet[2468]: E1105 16:02:23.195805 2468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 16:02:23.195879 kubelet[2468]: I1105 
16:02:23.195846 2468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 16:02:23.199019 kubelet[2468]: E1105 16:02:23.198945 2468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 16:02:23.885120 kubelet[2468]: I1105 16:02:23.884177 2468 apiserver.go:52] "Watching apiserver" Nov 5 16:02:23.933743 kubelet[2468]: I1105 16:02:23.933658 2468 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 16:02:26.726158 systemd[1]: Reload requested from client PID 2752 ('systemctl') (unit session-7.scope)... Nov 5 16:02:26.726179 systemd[1]: Reloading... Nov 5 16:02:26.837874 zram_generator::config[2793]: No configuration found. Nov 5 16:02:27.288310 systemd[1]: Reloading finished in 561 ms. Nov 5 16:02:27.328288 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:27.351026 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 16:02:27.351504 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:27.351584 systemd[1]: kubelet.service: Consumed 1.625s CPU time, 131.4M memory peak. Nov 5 16:02:27.354645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:27.677827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:27.690266 (kubelet)[2841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 16:02:27.761753 kubelet[2841]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 16:02:27.761753 kubelet[2841]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 16:02:27.761753 kubelet[2841]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 16:02:27.761753 kubelet[2841]: I1105 16:02:27.761738 2841 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 16:02:27.773518 kubelet[2841]: I1105 16:02:27.773122 2841 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 16:02:27.773518 kubelet[2841]: I1105 16:02:27.773224 2841 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 16:02:27.774941 kubelet[2841]: I1105 16:02:27.774894 2841 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 16:02:27.776422 kubelet[2841]: I1105 16:02:27.776372 2841 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 5 16:02:27.779520 kubelet[2841]: I1105 16:02:27.779455 2841 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 16:02:27.790707 kubelet[2841]: I1105 16:02:27.790653 2841 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 16:02:27.800053 kubelet[2841]: I1105 16:02:27.800005 2841 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 16:02:27.800349 kubelet[2841]: I1105 16:02:27.800292 2841 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 16:02:27.800611 kubelet[2841]: I1105 16:02:27.800339 2841 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 16:02:27.800718 kubelet[2841]: I1105 16:02:27.800613 2841 topology_manager.go:138] "Creating topology manager with none policy" Nov 
5 16:02:27.800718 kubelet[2841]: I1105 16:02:27.800626 2841 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 16:02:27.800877 kubelet[2841]: I1105 16:02:27.800725 2841 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:27.802148 kubelet[2841]: I1105 16:02:27.802085 2841 kubelet.go:446] "Attempting to sync node with API server" Nov 5 16:02:27.802208 kubelet[2841]: I1105 16:02:27.802156 2841 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 16:02:27.802208 kubelet[2841]: I1105 16:02:27.802191 2841 kubelet.go:352] "Adding apiserver pod source" Nov 5 16:02:27.802208 kubelet[2841]: I1105 16:02:27.802206 2841 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 16:02:27.814716 kubelet[2841]: I1105 16:02:27.814666 2841 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 16:02:27.819671 kubelet[2841]: I1105 16:02:27.817899 2841 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 16:02:27.820077 kubelet[2841]: I1105 16:02:27.820053 2841 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 16:02:27.820153 kubelet[2841]: I1105 16:02:27.820098 2841 server.go:1287] "Started kubelet" Nov 5 16:02:27.823111 kubelet[2841]: I1105 16:02:27.822993 2841 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 16:02:27.824151 kubelet[2841]: I1105 16:02:27.823392 2841 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 16:02:27.824490 kubelet[2841]: I1105 16:02:27.824471 2841 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 16:02:27.825892 kubelet[2841]: I1105 16:02:27.825850 2841 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 16:02:27.827967 kubelet[2841]: I1105 16:02:27.827931 2841 server.go:479] 
"Adding debug handlers to kubelet server" Nov 5 16:02:27.828562 kubelet[2841]: I1105 16:02:27.828539 2841 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 16:02:27.832496 kubelet[2841]: I1105 16:02:27.832454 2841 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 16:02:27.832702 kubelet[2841]: I1105 16:02:27.832598 2841 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 16:02:27.832739 kubelet[2841]: I1105 16:02:27.832727 2841 reconciler.go:26] "Reconciler: start to sync state" Nov 5 16:02:27.834514 kubelet[2841]: I1105 16:02:27.834483 2841 factory.go:221] Registration of the systemd container factory successfully Nov 5 16:02:27.835818 kubelet[2841]: E1105 16:02:27.834892 2841 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 16:02:27.836091 kubelet[2841]: I1105 16:02:27.836057 2841 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 16:02:27.839552 kubelet[2841]: I1105 16:02:27.839508 2841 factory.go:221] Registration of the containerd container factory successfully Nov 5 16:02:27.849819 kubelet[2841]: I1105 16:02:27.849738 2841 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 16:02:27.854593 kubelet[2841]: I1105 16:02:27.854529 2841 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 16:02:27.854809 kubelet[2841]: I1105 16:02:27.854692 2841 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 16:02:27.854809 kubelet[2841]: I1105 16:02:27.854754 2841 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 16:02:27.854809 kubelet[2841]: I1105 16:02:27.854779 2841 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 16:02:27.855882 kubelet[2841]: E1105 16:02:27.855353 2841 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 16:02:27.896620 kubelet[2841]: I1105 16:02:27.896583 2841 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 16:02:27.896620 kubelet[2841]: I1105 16:02:27.896609 2841 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 16:02:27.896620 kubelet[2841]: I1105 16:02:27.896633 2841 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:27.896860 kubelet[2841]: I1105 16:02:27.896842 2841 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 16:02:27.896890 kubelet[2841]: I1105 16:02:27.896854 2841 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 16:02:27.896890 kubelet[2841]: I1105 16:02:27.896875 2841 policy_none.go:49] "None policy: Start" Nov 5 16:02:27.896890 kubelet[2841]: I1105 16:02:27.896887 2841 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 16:02:27.896954 kubelet[2841]: I1105 16:02:27.896898 2841 state_mem.go:35] "Initializing new in-memory state store" Nov 5 16:02:27.897031 kubelet[2841]: I1105 16:02:27.897016 2841 state_mem.go:75] "Updated machine memory state" Nov 5 16:02:27.901571 kubelet[2841]: I1105 16:02:27.901531 2841 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 16:02:27.901779 kubelet[2841]: I1105 16:02:27.901748 2841 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 16:02:27.902002 kubelet[2841]: I1105 16:02:27.901883 2841 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 16:02:27.903116 kubelet[2841]: I1105 16:02:27.902927 2841 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager"
Nov 5 16:02:27.905818 kubelet[2841]: E1105 16:02:27.905335 2841 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 16:02:27.956617 kubelet[2841]: I1105 16:02:27.956428 2841 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 16:02:27.958626 kubelet[2841]: I1105 16:02:27.957886 2841 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 16:02:27.958626 kubelet[2841]: I1105 16:02:27.958299 2841 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 16:02:28.010832 kubelet[2841]: I1105 16:02:28.010782 2841 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 16:02:28.022557 kubelet[2841]: I1105 16:02:28.022514 2841 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 5 16:02:28.022728 kubelet[2841]: I1105 16:02:28.022711 2841 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 16:02:28.033806 kubelet[2841]: I1105 16:02:28.033728 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 16:02:28.034532 kubelet[2841]: I1105 16:02:28.034506 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 16:02:28.034570 kubelet[2841]: I1105 16:02:28.034543 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/821b3f6fa792237d8f19702b3c98bee5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"821b3f6fa792237d8f19702b3c98bee5\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 16:02:28.034593 kubelet[2841]: I1105 16:02:28.034581 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/821b3f6fa792237d8f19702b3c98bee5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"821b3f6fa792237d8f19702b3c98bee5\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 16:02:28.034666 kubelet[2841]: I1105 16:02:28.034597 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 16:02:28.034722 kubelet[2841]: I1105 16:02:28.034695 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 16:02:28.034799 kubelet[2841]: I1105 16:02:28.034739 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/821b3f6fa792237d8f19702b3c98bee5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"821b3f6fa792237d8f19702b3c98bee5\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 16:02:28.034826 kubelet[2841]: I1105 16:02:28.034798 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 16:02:28.034826 kubelet[2841]: I1105 16:02:28.034821 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 16:02:28.266125 kubelet[2841]: E1105 16:02:28.266058 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:28.266300 kubelet[2841]: E1105 16:02:28.266229 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:28.268301 kubelet[2841]: E1105 16:02:28.268176 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:28.803757 kubelet[2841]: I1105 16:02:28.803711 2841 apiserver.go:52] "Watching apiserver"
Nov 5 16:02:28.834651 kubelet[2841]: I1105 16:02:28.834596 2841 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 16:02:28.878180 kubelet[2841]: E1105 16:02:28.878137 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:28.878423 kubelet[2841]: E1105 16:02:28.878357 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:28.878558 kubelet[2841]: E1105 16:02:28.878525 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:28.899190 kubelet[2841]: I1105 16:02:28.899027 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.899011336 podStartE2EDuration="1.899011336s" podCreationTimestamp="2025-11-05 16:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:28.898805831 +0000 UTC m=+1.202941255" watchObservedRunningTime="2025-11-05 16:02:28.899011336 +0000 UTC m=+1.203146770"
Nov 5 16:02:28.911566 kubelet[2841]: I1105 16:02:28.911505 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.911488683 podStartE2EDuration="1.911488683s" podCreationTimestamp="2025-11-05 16:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:28.905526578 +0000 UTC m=+1.209662012" watchObservedRunningTime="2025-11-05 16:02:28.911488683 +0000 UTC m=+1.215624127"
Nov 5 16:02:28.921097 kubelet[2841]: I1105 16:02:28.920929 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.920911375 podStartE2EDuration="1.920911375s" podCreationTimestamp="2025-11-05 16:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:28.911625349 +0000 UTC m=+1.215760783" watchObservedRunningTime="2025-11-05 16:02:28.920911375 +0000 UTC m=+1.225046819"
Nov 5 16:02:29.879930 kubelet[2841]: E1105 16:02:29.879815 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:29.880527 kubelet[2841]: E1105 16:02:29.880378 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:30.881753 kubelet[2841]: E1105 16:02:30.881707 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:31.517192 kubelet[2841]: I1105 16:02:31.517133 2841 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 5 16:02:31.517546 containerd[1621]: time="2025-11-05T16:02:31.517507383Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 5 16:02:31.517984 kubelet[2841]: I1105 16:02:31.517671 2841 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 5 16:02:31.876856 systemd[1]: Created slice kubepods-besteffort-pod82125dc3_5a1b_4712_a41d_4ac8f5792531.slice - libcontainer container kubepods-besteffort-pod82125dc3_5a1b_4712_a41d_4ac8f5792531.slice.
Nov 5 16:02:31.884347 kubelet[2841]: E1105 16:02:31.884302 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:31.959816 kubelet[2841]: I1105 16:02:31.959716 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82125dc3-5a1b-4712-a41d-4ac8f5792531-xtables-lock\") pod \"kube-proxy-btvtz\" (UID: \"82125dc3-5a1b-4712-a41d-4ac8f5792531\") " pod="kube-system/kube-proxy-btvtz"
Nov 5 16:02:31.959816 kubelet[2841]: I1105 16:02:31.959810 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82125dc3-5a1b-4712-a41d-4ac8f5792531-kube-proxy\") pod \"kube-proxy-btvtz\" (UID: \"82125dc3-5a1b-4712-a41d-4ac8f5792531\") " pod="kube-system/kube-proxy-btvtz"
Nov 5 16:02:31.959986 kubelet[2841]: I1105 16:02:31.959840 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82125dc3-5a1b-4712-a41d-4ac8f5792531-lib-modules\") pod \"kube-proxy-btvtz\" (UID: \"82125dc3-5a1b-4712-a41d-4ac8f5792531\") " pod="kube-system/kube-proxy-btvtz"
Nov 5 16:02:31.959986 kubelet[2841]: I1105 16:02:31.959862 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trkgj\" (UniqueName: \"kubernetes.io/projected/82125dc3-5a1b-4712-a41d-4ac8f5792531-kube-api-access-trkgj\") pod \"kube-proxy-btvtz\" (UID: \"82125dc3-5a1b-4712-a41d-4ac8f5792531\") " pod="kube-system/kube-proxy-btvtz"
Nov 5 16:02:32.066599 kubelet[2841]: E1105 16:02:32.066543 2841 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 5 16:02:32.066599 kubelet[2841]: E1105 16:02:32.066581 2841 projected.go:194] Error preparing data for projected volume kube-api-access-trkgj for pod kube-system/kube-proxy-btvtz: configmap "kube-root-ca.crt" not found
Nov 5 16:02:32.066835 kubelet[2841]: E1105 16:02:32.066656 2841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82125dc3-5a1b-4712-a41d-4ac8f5792531-kube-api-access-trkgj podName:82125dc3-5a1b-4712-a41d-4ac8f5792531 nodeName:}" failed. No retries permitted until 2025-11-05 16:02:32.566626792 +0000 UTC m=+4.870762226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-trkgj" (UniqueName: "kubernetes.io/projected/82125dc3-5a1b-4712-a41d-4ac8f5792531-kube-api-access-trkgj") pod "kube-proxy-btvtz" (UID: "82125dc3-5a1b-4712-a41d-4ac8f5792531") : configmap "kube-root-ca.crt" not found
Nov 5 16:02:32.620993 systemd[1]: Created slice kubepods-besteffort-pod3116df1d_cd1b_4953_bb76_0f9bd1e4cb03.slice - libcontainer container kubepods-besteffort-pod3116df1d_cd1b_4953_bb76_0f9bd1e4cb03.slice.
Nov 5 16:02:32.667158 kubelet[2841]: I1105 16:02:32.666972 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4l6j\" (UniqueName: \"kubernetes.io/projected/3116df1d-cd1b-4953-bb76-0f9bd1e4cb03-kube-api-access-k4l6j\") pod \"tigera-operator-7dcd859c48-np6l8\" (UID: \"3116df1d-cd1b-4953-bb76-0f9bd1e4cb03\") " pod="tigera-operator/tigera-operator-7dcd859c48-np6l8"
Nov 5 16:02:32.667158 kubelet[2841]: I1105 16:02:32.667134 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3116df1d-cd1b-4953-bb76-0f9bd1e4cb03-var-lib-calico\") pod \"tigera-operator-7dcd859c48-np6l8\" (UID: \"3116df1d-cd1b-4953-bb76-0f9bd1e4cb03\") " pod="tigera-operator/tigera-operator-7dcd859c48-np6l8"
Nov 5 16:02:32.789184 kubelet[2841]: E1105 16:02:32.789105 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:32.791800 containerd[1621]: time="2025-11-05T16:02:32.791143564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btvtz,Uid:82125dc3-5a1b-4712-a41d-4ac8f5792531,Namespace:kube-system,Attempt:0,}"
Nov 5 16:02:32.925799 containerd[1621]: time="2025-11-05T16:02:32.925612438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-np6l8,Uid:3116df1d-cd1b-4953-bb76-0f9bd1e4cb03,Namespace:tigera-operator,Attempt:0,}"
Nov 5 16:02:32.964963 containerd[1621]: time="2025-11-05T16:02:32.964878065Z" level=info msg="connecting to shim 3a8cb2aadb2ce5540a3cb808f8eaaef0320e78b8a1dbd3381b9a30fe25be04c3" address="unix:///run/containerd/s/194b9fede90875f4406fbb542ca1a087e275eb01e49931274c0af92fefcd15cd" namespace=k8s.io protocol=ttrpc version=3
Nov 5 16:02:33.011140 kubelet[2841]: E1105 16:02:33.011086 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:33.020557 containerd[1621]: time="2025-11-05T16:02:33.020444230Z" level=info msg="connecting to shim 25f38dfe05fe9f50ff31feb51cfe356befb4c8ea905378da92cd2a001edfe3d7" address="unix:///run/containerd/s/7f0cb0dd7393541d7a78c14dda6b2a4d228fe2dc550959d3a56b6988393e7fdf" namespace=k8s.io protocol=ttrpc version=3
Nov 5 16:02:33.036032 systemd[1]: Started cri-containerd-3a8cb2aadb2ce5540a3cb808f8eaaef0320e78b8a1dbd3381b9a30fe25be04c3.scope - libcontainer container 3a8cb2aadb2ce5540a3cb808f8eaaef0320e78b8a1dbd3381b9a30fe25be04c3.
Nov 5 16:02:33.068076 systemd[1]: Started cri-containerd-25f38dfe05fe9f50ff31feb51cfe356befb4c8ea905378da92cd2a001edfe3d7.scope - libcontainer container 25f38dfe05fe9f50ff31feb51cfe356befb4c8ea905378da92cd2a001edfe3d7.
Nov 5 16:02:33.103625 containerd[1621]: time="2025-11-05T16:02:33.103559782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-btvtz,Uid:82125dc3-5a1b-4712-a41d-4ac8f5792531,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a8cb2aadb2ce5540a3cb808f8eaaef0320e78b8a1dbd3381b9a30fe25be04c3\""
Nov 5 16:02:33.105028 kubelet[2841]: E1105 16:02:33.104989 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:33.109440 containerd[1621]: time="2025-11-05T16:02:33.109389340Z" level=info msg="CreateContainer within sandbox \"3a8cb2aadb2ce5540a3cb808f8eaaef0320e78b8a1dbd3381b9a30fe25be04c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 5 16:02:33.128503 containerd[1621]: time="2025-11-05T16:02:33.128425947Z" level=info msg="Container 061458d27c93a412c4697761890598c21a13c6491bf311649aa9d354807d69f4: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:02:33.141172 containerd[1621]: time="2025-11-05T16:02:33.141109494Z" level=info msg="CreateContainer within sandbox \"3a8cb2aadb2ce5540a3cb808f8eaaef0320e78b8a1dbd3381b9a30fe25be04c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"061458d27c93a412c4697761890598c21a13c6491bf311649aa9d354807d69f4\""
Nov 5 16:02:33.142217 containerd[1621]: time="2025-11-05T16:02:33.142177486Z" level=info msg="StartContainer for \"061458d27c93a412c4697761890598c21a13c6491bf311649aa9d354807d69f4\""
Nov 5 16:02:33.145262 containerd[1621]: time="2025-11-05T16:02:33.145233696Z" level=info msg="connecting to shim 061458d27c93a412c4697761890598c21a13c6491bf311649aa9d354807d69f4" address="unix:///run/containerd/s/194b9fede90875f4406fbb542ca1a087e275eb01e49931274c0af92fefcd15cd" protocol=ttrpc version=3
Nov 5 16:02:33.153047 containerd[1621]: time="2025-11-05T16:02:33.152971621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-np6l8,Uid:3116df1d-cd1b-4953-bb76-0f9bd1e4cb03,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"25f38dfe05fe9f50ff31feb51cfe356befb4c8ea905378da92cd2a001edfe3d7\""
Nov 5 16:02:33.157095 containerd[1621]: time="2025-11-05T16:02:33.156988171Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 5 16:02:33.177062 systemd[1]: Started cri-containerd-061458d27c93a412c4697761890598c21a13c6491bf311649aa9d354807d69f4.scope - libcontainer container 061458d27c93a412c4697761890598c21a13c6491bf311649aa9d354807d69f4.
Nov 5 16:02:33.435696 containerd[1621]: time="2025-11-05T16:02:33.435503276Z" level=info msg="StartContainer for \"061458d27c93a412c4697761890598c21a13c6491bf311649aa9d354807d69f4\" returns successfully"
Nov 5 16:02:33.895019 kubelet[2841]: E1105 16:02:33.894964 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:33.895215 kubelet[2841]: E1105 16:02:33.895123 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:34.661534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1371214236.mount: Deactivated successfully.
Nov 5 16:02:35.296386 kubelet[2841]: E1105 16:02:35.296342 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:35.438827 kubelet[2841]: I1105 16:02:35.438347 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-btvtz" podStartSLOduration=4.438320751 podStartE2EDuration="4.438320751s" podCreationTimestamp="2025-11-05 16:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:33.919067857 +0000 UTC m=+6.223203281" watchObservedRunningTime="2025-11-05 16:02:35.438320751 +0000 UTC m=+7.742456186"
Nov 5 16:02:35.820940 containerd[1621]: time="2025-11-05T16:02:35.820877962Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:35.822394 containerd[1621]: time="2025-11-05T16:02:35.822349351Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 5 16:02:35.823601 containerd[1621]: time="2025-11-05T16:02:35.823546374Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:35.825680 containerd[1621]: time="2025-11-05T16:02:35.825629841Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:35.826195 containerd[1621]: time="2025-11-05T16:02:35.826163330Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.669082495s"
Nov 5 16:02:35.826195 containerd[1621]: time="2025-11-05T16:02:35.826193667Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 5 16:02:35.827968 containerd[1621]: time="2025-11-05T16:02:35.827943979Z" level=info msg="CreateContainer within sandbox \"25f38dfe05fe9f50ff31feb51cfe356befb4c8ea905378da92cd2a001edfe3d7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 5 16:02:35.837854 containerd[1621]: time="2025-11-05T16:02:35.837809614Z" level=info msg="Container f9d5290532c8cd7dafeaa3d9ae06da1ae6296aa64d177d0d4053d516c50aeb09: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:02:35.846890 containerd[1621]: time="2025-11-05T16:02:35.846786483Z" level=info msg="CreateContainer within sandbox \"25f38dfe05fe9f50ff31feb51cfe356befb4c8ea905378da92cd2a001edfe3d7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f9d5290532c8cd7dafeaa3d9ae06da1ae6296aa64d177d0d4053d516c50aeb09\""
Nov 5 16:02:35.847381 containerd[1621]: time="2025-11-05T16:02:35.847354608Z" level=info msg="StartContainer for \"f9d5290532c8cd7dafeaa3d9ae06da1ae6296aa64d177d0d4053d516c50aeb09\""
Nov 5 16:02:35.848299 containerd[1621]: time="2025-11-05T16:02:35.848276406Z" level=info msg="connecting to shim f9d5290532c8cd7dafeaa3d9ae06da1ae6296aa64d177d0d4053d516c50aeb09" address="unix:///run/containerd/s/7f0cb0dd7393541d7a78c14dda6b2a4d228fe2dc550959d3a56b6988393e7fdf" protocol=ttrpc version=3
Nov 5 16:02:35.882206 systemd[1]: Started cri-containerd-f9d5290532c8cd7dafeaa3d9ae06da1ae6296aa64d177d0d4053d516c50aeb09.scope - libcontainer container f9d5290532c8cd7dafeaa3d9ae06da1ae6296aa64d177d0d4053d516c50aeb09.
Nov 5 16:02:35.900011 kubelet[2841]: E1105 16:02:35.899970 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:35.923481 containerd[1621]: time="2025-11-05T16:02:35.923436531Z" level=info msg="StartContainer for \"f9d5290532c8cd7dafeaa3d9ae06da1ae6296aa64d177d0d4053d516c50aeb09\" returns successfully"
Nov 5 16:02:36.903394 kubelet[2841]: E1105 16:02:36.903356 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:43.205536 sudo[1814]: pam_unix(sudo:session): session closed for user root
Nov 5 16:02:43.210800 sshd[1813]: Connection closed by 10.0.0.1 port 33620
Nov 5 16:02:43.210311 sshd-session[1810]: pam_unix(sshd:session): session closed for user core
Nov 5 16:02:43.218433 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:33620.service: Deactivated successfully.
Nov 5 16:02:43.226861 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 16:02:43.228912 systemd[1]: session-7.scope: Consumed 9.085s CPU time, 218.6M memory peak.
Nov 5 16:02:43.238449 systemd-logind[1589]: Session 7 logged out. Waiting for processes to exit.
Nov 5 16:02:43.240413 systemd-logind[1589]: Removed session 7.
Nov 5 16:02:47.993849 kubelet[2841]: I1105 16:02:47.993682 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-np6l8" podStartSLOduration=13.321032155 podStartE2EDuration="15.993631724s" podCreationTimestamp="2025-11-05 16:02:32 +0000 UTC" firstStartedPulling="2025-11-05 16:02:33.154269745 +0000 UTC m=+5.458405169" lastFinishedPulling="2025-11-05 16:02:35.826869304 +0000 UTC m=+8.131004738" observedRunningTime="2025-11-05 16:02:36.911020428 +0000 UTC m=+9.215155892" watchObservedRunningTime="2025-11-05 16:02:47.993631724 +0000 UTC m=+20.297767158"
Nov 5 16:02:48.013923 systemd[1]: Created slice kubepods-besteffort-pod4ab06f77_2216_41dd_b146_b678c54c719d.slice - libcontainer container kubepods-besteffort-pod4ab06f77_2216_41dd_b146_b678c54c719d.slice.
Nov 5 16:02:48.100740 kubelet[2841]: I1105 16:02:48.100686 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4ab06f77-2216-41dd-b146-b678c54c719d-typha-certs\") pod \"calico-typha-99b949d9-rzvlj\" (UID: \"4ab06f77-2216-41dd-b146-b678c54c719d\") " pod="calico-system/calico-typha-99b949d9-rzvlj"
Nov 5 16:02:48.100740 kubelet[2841]: I1105 16:02:48.100736 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ab06f77-2216-41dd-b146-b678c54c719d-tigera-ca-bundle\") pod \"calico-typha-99b949d9-rzvlj\" (UID: \"4ab06f77-2216-41dd-b146-b678c54c719d\") " pod="calico-system/calico-typha-99b949d9-rzvlj"
Nov 5 16:02:48.100740 kubelet[2841]: I1105 16:02:48.100758 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8cmg\" (UniqueName: \"kubernetes.io/projected/4ab06f77-2216-41dd-b146-b678c54c719d-kube-api-access-h8cmg\") pod \"calico-typha-99b949d9-rzvlj\" (UID: \"4ab06f77-2216-41dd-b146-b678c54c719d\") " pod="calico-system/calico-typha-99b949d9-rzvlj"
Nov 5 16:02:48.117390 systemd[1]: Created slice kubepods-besteffort-pod9062e02b_bdf8_41a2_b4c2_0b455062ddca.slice - libcontainer container kubepods-besteffort-pod9062e02b_bdf8_41a2_b4c2_0b455062ddca.slice.
Nov 5 16:02:48.201331 kubelet[2841]: I1105 16:02:48.201254 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9062e02b-bdf8-41a2-b4c2-0b455062ddca-var-run-calico\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201331 kubelet[2841]: I1105 16:02:48.201341 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9062e02b-bdf8-41a2-b4c2-0b455062ddca-xtables-lock\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201564 kubelet[2841]: I1105 16:02:48.201388 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9062e02b-bdf8-41a2-b4c2-0b455062ddca-policysync\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201564 kubelet[2841]: I1105 16:02:48.201411 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9062e02b-bdf8-41a2-b4c2-0b455062ddca-cni-net-dir\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201564 kubelet[2841]: I1105 16:02:48.201432 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9062e02b-bdf8-41a2-b4c2-0b455062ddca-cni-bin-dir\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201564 kubelet[2841]: I1105 16:02:48.201455 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9062e02b-bdf8-41a2-b4c2-0b455062ddca-cni-log-dir\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201564 kubelet[2841]: I1105 16:02:48.201474 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9062e02b-bdf8-41a2-b4c2-0b455062ddca-node-certs\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201758 kubelet[2841]: I1105 16:02:48.201495 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9062e02b-bdf8-41a2-b4c2-0b455062ddca-tigera-ca-bundle\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201758 kubelet[2841]: I1105 16:02:48.201521 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9062e02b-bdf8-41a2-b4c2-0b455062ddca-flexvol-driver-host\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201758 kubelet[2841]: I1105 16:02:48.201542 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9062e02b-bdf8-41a2-b4c2-0b455062ddca-lib-modules\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201758 kubelet[2841]: I1105 16:02:48.201562 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9062e02b-bdf8-41a2-b4c2-0b455062ddca-var-lib-calico\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.201758 kubelet[2841]: I1105 16:02:48.201585 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xm98\" (UniqueName: \"kubernetes.io/projected/9062e02b-bdf8-41a2-b4c2-0b455062ddca-kube-api-access-8xm98\") pod \"calico-node-gjss5\" (UID: \"9062e02b-bdf8-41a2-b4c2-0b455062ddca\") " pod="calico-system/calico-node-gjss5"
Nov 5 16:02:48.306289 kubelet[2841]: E1105 16:02:48.306151 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.306289 kubelet[2841]: W1105 16:02:48.306182 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.306289 kubelet[2841]: E1105 16:02:48.306224 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 5 16:02:48.309659 kubelet[2841]: E1105 16:02:48.309602 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.311795 kubelet[2841]: W1105 16:02:48.309631 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.311795 kubelet[2841]: E1105 16:02:48.309839 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 16:02:48.326548 kubelet[2841]: E1105 16:02:48.319044 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214"
Nov 5 16:02:48.326548 kubelet[2841]: E1105 16:02:48.319894 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.326548 kubelet[2841]: W1105 16:02:48.319906 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.326548 kubelet[2841]: E1105 16:02:48.319919 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 5 16:02:48.327990 kubelet[2841]: E1105 16:02:48.327953 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 16:02:48.329103 containerd[1621]: time="2025-11-05T16:02:48.329058564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-99b949d9-rzvlj,Uid:4ab06f77-2216-41dd-b146-b678c54c719d,Namespace:calico-system,Attempt:0,}"
Nov 5 16:02:48.366675 kubelet[2841]: E1105 16:02:48.366617 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.366675 kubelet[2841]: W1105 16:02:48.366644 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.366675 kubelet[2841]: E1105 16:02:48.366668 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 16:02:48.366949 kubelet[2841]: E1105 16:02:48.366868 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.366949 kubelet[2841]: W1105 16:02:48.366876 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.366949 kubelet[2841]: E1105 16:02:48.366884 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 5 16:02:48.367072 kubelet[2841]: E1105 16:02:48.367036 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.367072 kubelet[2841]: W1105 16:02:48.367044 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.367072 kubelet[2841]: E1105 16:02:48.367053 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 16:02:48.367277 kubelet[2841]: E1105 16:02:48.367233 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.367277 kubelet[2841]: W1105 16:02:48.367242 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.367277 kubelet[2841]: E1105 16:02:48.367249 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 5 16:02:48.367451 kubelet[2841]: E1105 16:02:48.367400 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.367451 kubelet[2841]: W1105 16:02:48.367407 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.367451 kubelet[2841]: E1105 16:02:48.367414 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 16:02:48.367595 kubelet[2841]: E1105 16:02:48.367572 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.367595 kubelet[2841]: W1105 16:02:48.367584 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.367595 kubelet[2841]: E1105 16:02:48.367594 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 5 16:02:48.367804 kubelet[2841]: E1105 16:02:48.367757 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.367804 kubelet[2841]: W1105 16:02:48.367791 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.367804 kubelet[2841]: E1105 16:02:48.367801 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 16:02:48.368239 kubelet[2841]: E1105 16:02:48.368223 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 16:02:48.368239 kubelet[2841]: W1105 16:02:48.368236 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 16:02:48.368287 kubelet[2841]: E1105 16:02:48.368246 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.368506 kubelet[2841]: E1105 16:02:48.368480 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.368506 kubelet[2841]: W1105 16:02:48.368491 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.368506 kubelet[2841]: E1105 16:02:48.368501 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.368705 kubelet[2841]: E1105 16:02:48.368684 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.368705 kubelet[2841]: W1105 16:02:48.368695 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.368705 kubelet[2841]: E1105 16:02:48.368703 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.369000 kubelet[2841]: E1105 16:02:48.368874 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.369000 kubelet[2841]: W1105 16:02:48.368882 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.369000 kubelet[2841]: E1105 16:02:48.368890 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.369283 kubelet[2841]: E1105 16:02:48.369029 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.369283 kubelet[2841]: W1105 16:02:48.369035 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.369283 kubelet[2841]: E1105 16:02:48.369042 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.369283 kubelet[2841]: E1105 16:02:48.369178 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.369283 kubelet[2841]: W1105 16:02:48.369184 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.369283 kubelet[2841]: E1105 16:02:48.369191 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.369903 kubelet[2841]: E1105 16:02:48.369370 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.369903 kubelet[2841]: W1105 16:02:48.369377 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.369903 kubelet[2841]: E1105 16:02:48.369384 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.369903 kubelet[2841]: E1105 16:02:48.369545 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.369903 kubelet[2841]: W1105 16:02:48.369551 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.369903 kubelet[2841]: E1105 16:02:48.369558 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.369903 kubelet[2841]: E1105 16:02:48.369749 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.369903 kubelet[2841]: W1105 16:02:48.369757 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.369903 kubelet[2841]: E1105 16:02:48.369777 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.370382 kubelet[2841]: E1105 16:02:48.369991 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.370382 kubelet[2841]: W1105 16:02:48.369999 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.370382 kubelet[2841]: E1105 16:02:48.370008 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.370382 kubelet[2841]: E1105 16:02:48.370164 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.370382 kubelet[2841]: W1105 16:02:48.370170 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.370382 kubelet[2841]: E1105 16:02:48.370177 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.370799 kubelet[2841]: E1105 16:02:48.370730 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.370799 kubelet[2841]: W1105 16:02:48.370744 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.370799 kubelet[2841]: E1105 16:02:48.370753 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.370971 kubelet[2841]: E1105 16:02:48.370963 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.370971 kubelet[2841]: W1105 16:02:48.370971 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.371074 kubelet[2841]: E1105 16:02:48.370982 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.372378 containerd[1621]: time="2025-11-05T16:02:48.372322167Z" level=info msg="connecting to shim e87e9b9321c7ed5f6e878ab23d88c0cd8316c238a29e47fe40a605fc1903ac3d" address="unix:///run/containerd/s/cc212d1699f404afd9094a81e75b33eb7aef6aa11909d2c5fbae550a6e867002" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:48.397019 systemd[1]: Started cri-containerd-e87e9b9321c7ed5f6e878ab23d88c0cd8316c238a29e47fe40a605fc1903ac3d.scope - libcontainer container e87e9b9321c7ed5f6e878ab23d88c0cd8316c238a29e47fe40a605fc1903ac3d. 
Nov 5 16:02:48.403860 kubelet[2841]: E1105 16:02:48.403825 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.403860 kubelet[2841]: W1105 16:02:48.403847 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.403860 kubelet[2841]: E1105 16:02:48.403865 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.403860 kubelet[2841]: I1105 16:02:48.403892 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/09c014d2-b99d-493b-9e36-c9afae0fa214-varrun\") pod \"csi-node-driver-m7g79\" (UID: \"09c014d2-b99d-493b-9e36-c9afae0fa214\") " pod="calico-system/csi-node-driver-m7g79" Nov 5 16:02:48.404354 kubelet[2841]: E1105 16:02:48.404103 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.404354 kubelet[2841]: W1105 16:02:48.404112 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.404354 kubelet[2841]: E1105 16:02:48.404121 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.404354 kubelet[2841]: I1105 16:02:48.404134 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09c014d2-b99d-493b-9e36-c9afae0fa214-socket-dir\") pod \"csi-node-driver-m7g79\" (UID: \"09c014d2-b99d-493b-9e36-c9afae0fa214\") " pod="calico-system/csi-node-driver-m7g79" Nov 5 16:02:48.404477 kubelet[2841]: E1105 16:02:48.404362 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.404477 kubelet[2841]: W1105 16:02:48.404371 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.404477 kubelet[2841]: E1105 16:02:48.404380 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.404477 kubelet[2841]: I1105 16:02:48.404414 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09c014d2-b99d-493b-9e36-c9afae0fa214-kubelet-dir\") pod \"csi-node-driver-m7g79\" (UID: \"09c014d2-b99d-493b-9e36-c9afae0fa214\") " pod="calico-system/csi-node-driver-m7g79" Nov 5 16:02:48.405504 kubelet[2841]: E1105 16:02:48.404604 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.405504 kubelet[2841]: W1105 16:02:48.404625 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.405504 kubelet[2841]: E1105 16:02:48.404653 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.405504 kubelet[2841]: I1105 16:02:48.404668 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8zcg\" (UniqueName: \"kubernetes.io/projected/09c014d2-b99d-493b-9e36-c9afae0fa214-kube-api-access-s8zcg\") pod \"csi-node-driver-m7g79\" (UID: \"09c014d2-b99d-493b-9e36-c9afae0fa214\") " pod="calico-system/csi-node-driver-m7g79" Nov 5 16:02:48.405504 kubelet[2841]: E1105 16:02:48.404862 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.405504 kubelet[2841]: W1105 16:02:48.404885 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.405504 kubelet[2841]: E1105 16:02:48.404896 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.405504 kubelet[2841]: I1105 16:02:48.404910 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09c014d2-b99d-493b-9e36-c9afae0fa214-registration-dir\") pod \"csi-node-driver-m7g79\" (UID: \"09c014d2-b99d-493b-9e36-c9afae0fa214\") " pod="calico-system/csi-node-driver-m7g79" Nov 5 16:02:48.406486 kubelet[2841]: E1105 16:02:48.406444 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.406883 kubelet[2841]: W1105 16:02:48.406824 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.407597 kubelet[2841]: E1105 16:02:48.407482 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.407597 kubelet[2841]: W1105 16:02:48.407500 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.407597 kubelet[2841]: E1105 16:02:48.407514 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.407723 kubelet[2841]: E1105 16:02:48.407656 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.407723 kubelet[2841]: W1105 16:02:48.407663 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.407723 kubelet[2841]: E1105 16:02:48.407671 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.408145 kubelet[2841]: E1105 16:02:48.407710 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.408145 kubelet[2841]: E1105 16:02:48.407870 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.408145 kubelet[2841]: W1105 16:02:48.407880 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.408145 kubelet[2841]: E1105 16:02:48.407900 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.408443 kubelet[2841]: E1105 16:02:48.408287 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.408443 kubelet[2841]: W1105 16:02:48.408316 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.408443 kubelet[2841]: E1105 16:02:48.408339 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.408730 kubelet[2841]: E1105 16:02:48.408556 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.408730 kubelet[2841]: W1105 16:02:48.408569 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.408730 kubelet[2841]: E1105 16:02:48.408583 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.408872 kubelet[2841]: E1105 16:02:48.408808 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.408872 kubelet[2841]: W1105 16:02:48.408816 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.408872 kubelet[2841]: E1105 16:02:48.408826 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.409215 kubelet[2841]: E1105 16:02:48.409041 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.409215 kubelet[2841]: W1105 16:02:48.409055 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.409215 kubelet[2841]: E1105 16:02:48.409064 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.409348 kubelet[2841]: E1105 16:02:48.409280 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.409348 kubelet[2841]: W1105 16:02:48.409288 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.409348 kubelet[2841]: E1105 16:02:48.409310 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.409783 kubelet[2841]: E1105 16:02:48.409543 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.409783 kubelet[2841]: W1105 16:02:48.409560 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.409783 kubelet[2841]: E1105 16:02:48.409570 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.422306 kubelet[2841]: E1105 16:02:48.422259 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:48.423891 containerd[1621]: time="2025-11-05T16:02:48.423850249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gjss5,Uid:9062e02b-bdf8-41a2-b4c2-0b455062ddca,Namespace:calico-system,Attempt:0,}" Nov 5 16:02:48.463854 containerd[1621]: time="2025-11-05T16:02:48.463793088Z" level=info msg="connecting to shim eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00" address="unix:///run/containerd/s/23e4c5514699ce8d441941f186cc94ce52a7bc1aa25853c80bdc3701601432ce" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:48.468255 containerd[1621]: time="2025-11-05T16:02:48.468225677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-99b949d9-rzvlj,Uid:4ab06f77-2216-41dd-b146-b678c54c719d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e87e9b9321c7ed5f6e878ab23d88c0cd8316c238a29e47fe40a605fc1903ac3d\"" Nov 5 16:02:48.473132 kubelet[2841]: E1105 16:02:48.473089 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:48.474165 containerd[1621]: time="2025-11-05T16:02:48.474132689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 16:02:48.507424 kubelet[2841]: E1105 16:02:48.507375 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.507424 kubelet[2841]: W1105 16:02:48.507398 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 
16:02:48.507424 kubelet[2841]: E1105 16:02:48.507420 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.508118 kubelet[2841]: E1105 16:02:48.508088 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.508118 kubelet[2841]: W1105 16:02:48.508100 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.508118 kubelet[2841]: E1105 16:02:48.508115 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.509205 kubelet[2841]: E1105 16:02:48.509169 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.509205 kubelet[2841]: W1105 16:02:48.509182 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.509205 kubelet[2841]: E1105 16:02:48.509198 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.509399 systemd[1]: Started cri-containerd-eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00.scope - libcontainer container eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00. 
Nov 5 16:02:48.510097 kubelet[2841]: E1105 16:02:48.510077 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.510097 kubelet[2841]: W1105 16:02:48.510087 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.510177 kubelet[2841]: E1105 16:02:48.510137 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.510904 kubelet[2841]: E1105 16:02:48.510882 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.510904 kubelet[2841]: W1105 16:02:48.510897 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.511403 kubelet[2841]: E1105 16:02:48.510979 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.511788 kubelet[2841]: E1105 16:02:48.511749 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.511836 kubelet[2841]: W1105 16:02:48.511761 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.511836 kubelet[2841]: E1105 16:02:48.511827 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.512118 kubelet[2841]: E1105 16:02:48.512104 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.512118 kubelet[2841]: W1105 16:02:48.512116 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.512186 kubelet[2841]: E1105 16:02:48.512156 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.512371 kubelet[2841]: E1105 16:02:48.512350 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.512371 kubelet[2841]: W1105 16:02:48.512361 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.512591 kubelet[2841]: E1105 16:02:48.512578 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.512906 kubelet[2841]: E1105 16:02:48.512888 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.512906 kubelet[2841]: W1105 16:02:48.512899 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.512974 kubelet[2841]: E1105 16:02:48.512947 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.513168 kubelet[2841]: E1105 16:02:48.513145 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.513168 kubelet[2841]: W1105 16:02:48.513155 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.513232 kubelet[2841]: E1105 16:02:48.513216 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.513396 kubelet[2841]: E1105 16:02:48.513379 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.513422 kubelet[2841]: W1105 16:02:48.513391 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.513487 kubelet[2841]: E1105 16:02:48.513470 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.513644 kubelet[2841]: E1105 16:02:48.513619 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.513644 kubelet[2841]: W1105 16:02:48.513631 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.513998 kubelet[2841]: E1105 16:02:48.513701 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.513998 kubelet[2841]: E1105 16:02:48.513890 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.513998 kubelet[2841]: W1105 16:02:48.513899 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.513998 kubelet[2841]: E1105 16:02:48.513933 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.514198 kubelet[2841]: E1105 16:02:48.514181 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.514198 kubelet[2841]: W1105 16:02:48.514192 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.515097 kubelet[2841]: E1105 16:02:48.514266 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.515097 kubelet[2841]: E1105 16:02:48.514438 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.515097 kubelet[2841]: W1105 16:02:48.514447 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.515097 kubelet[2841]: E1105 16:02:48.514481 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.515097 kubelet[2841]: E1105 16:02:48.514684 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.515097 kubelet[2841]: W1105 16:02:48.514693 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.515097 kubelet[2841]: E1105 16:02:48.514758 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.515097 kubelet[2841]: E1105 16:02:48.515101 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.515330 kubelet[2841]: W1105 16:02:48.515109 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.515330 kubelet[2841]: E1105 16:02:48.515286 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.515425 kubelet[2841]: E1105 16:02:48.515410 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.515455 kubelet[2841]: W1105 16:02:48.515420 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.515541 kubelet[2841]: E1105 16:02:48.515525 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.515725 kubelet[2841]: E1105 16:02:48.515707 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.515777 kubelet[2841]: W1105 16:02:48.515727 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.515809 kubelet[2841]: E1105 16:02:48.515789 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.516042 kubelet[2841]: E1105 16:02:48.516020 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.516042 kubelet[2841]: W1105 16:02:48.516031 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.516099 kubelet[2841]: E1105 16:02:48.516089 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.516540 kubelet[2841]: E1105 16:02:48.516524 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.516540 kubelet[2841]: W1105 16:02:48.516535 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.516611 kubelet[2841]: E1105 16:02:48.516586 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.517172 kubelet[2841]: E1105 16:02:48.517033 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.517172 kubelet[2841]: W1105 16:02:48.517044 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.517172 kubelet[2841]: E1105 16:02:48.517119 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.517381 kubelet[2841]: E1105 16:02:48.517290 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.517381 kubelet[2841]: W1105 16:02:48.517298 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.517549 kubelet[2841]: E1105 16:02:48.517415 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.517549 kubelet[2841]: E1105 16:02:48.517523 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.517549 kubelet[2841]: W1105 16:02:48.517529 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.517651 kubelet[2841]: E1105 16:02:48.517587 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.517756 kubelet[2841]: E1105 16:02:48.517730 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.517756 kubelet[2841]: W1105 16:02:48.517740 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.517756 kubelet[2841]: E1105 16:02:48.517748 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:48.526612 kubelet[2841]: E1105 16:02:48.526285 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:48.526612 kubelet[2841]: W1105 16:02:48.526317 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:48.526612 kubelet[2841]: E1105 16:02:48.526336 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:48.597033 containerd[1621]: time="2025-11-05T16:02:48.596899931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gjss5,Uid:9062e02b-bdf8-41a2-b4c2-0b455062ddca,Namespace:calico-system,Attempt:0,} returns sandbox id \"eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00\"" Nov 5 16:02:48.597879 kubelet[2841]: E1105 16:02:48.597841 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:49.856154 kubelet[2841]: E1105 16:02:49.855814 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:02:50.199072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163035892.mount: Deactivated successfully. 
Nov 5 16:02:51.346894 containerd[1621]: time="2025-11-05T16:02:51.346561311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:51.348244 containerd[1621]: time="2025-11-05T16:02:51.347640407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 16:02:51.349276 containerd[1621]: time="2025-11-05T16:02:51.349255927Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:51.352591 containerd[1621]: time="2025-11-05T16:02:51.352548614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:51.354725 containerd[1621]: time="2025-11-05T16:02:51.354675146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.880472281s" Nov 5 16:02:51.354725 containerd[1621]: time="2025-11-05T16:02:51.354724371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 16:02:51.360129 containerd[1621]: time="2025-11-05T16:02:51.359840607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 16:02:51.378668 containerd[1621]: time="2025-11-05T16:02:51.378608176Z" level=info msg="CreateContainer within sandbox \"e87e9b9321c7ed5f6e878ab23d88c0cd8316c238a29e47fe40a605fc1903ac3d\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 16:02:51.391039 containerd[1621]: time="2025-11-05T16:02:51.390994784Z" level=info msg="Container 00082a69de081e273f9de81e7c474cc6d9bcd7af5ef518ff8ecc8df52e777f9b: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:51.401954 containerd[1621]: time="2025-11-05T16:02:51.401898579Z" level=info msg="CreateContainer within sandbox \"e87e9b9321c7ed5f6e878ab23d88c0cd8316c238a29e47fe40a605fc1903ac3d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"00082a69de081e273f9de81e7c474cc6d9bcd7af5ef518ff8ecc8df52e777f9b\"" Nov 5 16:02:51.408224 containerd[1621]: time="2025-11-05T16:02:51.408167594Z" level=info msg="StartContainer for \"00082a69de081e273f9de81e7c474cc6d9bcd7af5ef518ff8ecc8df52e777f9b\"" Nov 5 16:02:51.410842 containerd[1621]: time="2025-11-05T16:02:51.410800912Z" level=info msg="connecting to shim 00082a69de081e273f9de81e7c474cc6d9bcd7af5ef518ff8ecc8df52e777f9b" address="unix:///run/containerd/s/cc212d1699f404afd9094a81e75b33eb7aef6aa11909d2c5fbae550a6e867002" protocol=ttrpc version=3 Nov 5 16:02:51.437014 systemd[1]: Started cri-containerd-00082a69de081e273f9de81e7c474cc6d9bcd7af5ef518ff8ecc8df52e777f9b.scope - libcontainer container 00082a69de081e273f9de81e7c474cc6d9bcd7af5ef518ff8ecc8df52e777f9b. 
Nov 5 16:02:51.535511 containerd[1621]: time="2025-11-05T16:02:51.535447428Z" level=info msg="StartContainer for \"00082a69de081e273f9de81e7c474cc6d9bcd7af5ef518ff8ecc8df52e777f9b\" returns successfully" Nov 5 16:02:51.873333 kubelet[2841]: E1105 16:02:51.872897 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:02:51.947296 kubelet[2841]: E1105 16:02:51.947252 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:51.996502 kubelet[2841]: E1105 16:02:51.996396 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:51.996502 kubelet[2841]: W1105 16:02:51.996433 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:51.997212 kubelet[2841]: E1105 16:02:51.997173 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:51.998603 kubelet[2841]: E1105 16:02:51.998569 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:51.998603 kubelet[2841]: W1105 16:02:51.998595 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:51.998879 kubelet[2841]: E1105 16:02:51.998614 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:51.999302 kubelet[2841]: E1105 16:02:51.999229 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:51.999302 kubelet[2841]: W1105 16:02:51.999246 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:51.999302 kubelet[2841]: E1105 16:02:51.999257 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:51.999594 kubelet[2841]: E1105 16:02:51.999570 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:51.999594 kubelet[2841]: W1105 16:02:51.999586 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:51.999594 kubelet[2841]: E1105 16:02:51.999598 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:51.999938 kubelet[2841]: E1105 16:02:51.999908 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.000375 kubelet[2841]: W1105 16:02:52.000344 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.000375 kubelet[2841]: E1105 16:02:52.000369 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.001292 kubelet[2841]: I1105 16:02:52.001230 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-99b949d9-rzvlj" podStartSLOduration=2.117058488 podStartE2EDuration="5.001208889s" podCreationTimestamp="2025-11-05 16:02:47 +0000 UTC" firstStartedPulling="2025-11-05 16:02:48.473898087 +0000 UTC m=+20.778033521" lastFinishedPulling="2025-11-05 16:02:51.358048488 +0000 UTC m=+23.662183922" observedRunningTime="2025-11-05 16:02:51.999264468 +0000 UTC m=+24.303399922" watchObservedRunningTime="2025-11-05 16:02:52.001208889 +0000 UTC m=+24.305344323" Nov 5 16:02:52.002527 kubelet[2841]: E1105 16:02:52.002458 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.002527 kubelet[2841]: W1105 16:02:52.002489 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.002527 kubelet[2841]: E1105 16:02:52.002509 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:52.004053 kubelet[2841]: E1105 16:02:52.003213 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.004053 kubelet[2841]: W1105 16:02:52.003225 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.004053 kubelet[2841]: E1105 16:02:52.003238 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.004887 kubelet[2841]: E1105 16:02:52.004856 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.004887 kubelet[2841]: W1105 16:02:52.004879 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.004999 kubelet[2841]: E1105 16:02:52.004893 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:52.005269 kubelet[2841]: E1105 16:02:52.005233 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.005318 kubelet[2841]: W1105 16:02:52.005267 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.005318 kubelet[2841]: E1105 16:02:52.005299 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.005610 kubelet[2841]: E1105 16:02:52.005591 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.005610 kubelet[2841]: W1105 16:02:52.005605 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.005697 kubelet[2841]: E1105 16:02:52.005616 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:52.005871 kubelet[2841]: E1105 16:02:52.005852 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.005871 kubelet[2841]: W1105 16:02:52.005867 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.005949 kubelet[2841]: E1105 16:02:52.005877 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.006137 kubelet[2841]: E1105 16:02:52.006067 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.006137 kubelet[2841]: W1105 16:02:52.006080 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.006137 kubelet[2841]: E1105 16:02:52.006090 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:52.008316 kubelet[2841]: E1105 16:02:52.006276 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.008316 kubelet[2841]: W1105 16:02:52.006288 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.008316 kubelet[2841]: E1105 16:02:52.006297 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.008316 kubelet[2841]: E1105 16:02:52.006532 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.008316 kubelet[2841]: W1105 16:02:52.006543 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.008316 kubelet[2841]: E1105 16:02:52.006554 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:52.008551 kubelet[2841]: E1105 16:02:52.008336 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.008551 kubelet[2841]: W1105 16:02:52.008346 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.008551 kubelet[2841]: E1105 16:02:52.008355 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.040256 kubelet[2841]: E1105 16:02:52.040131 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.040256 kubelet[2841]: W1105 16:02:52.040158 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.040256 kubelet[2841]: E1105 16:02:52.040182 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:52.043500 kubelet[2841]: E1105 16:02:52.042873 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.043500 kubelet[2841]: W1105 16:02:52.042887 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.043500 kubelet[2841]: E1105 16:02:52.042904 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.044178 kubelet[2841]: E1105 16:02:52.043947 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.044178 kubelet[2841]: W1105 16:02:52.043960 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.044178 kubelet[2841]: E1105 16:02:52.044014 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:52.045415 kubelet[2841]: E1105 16:02:52.044911 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.045415 kubelet[2841]: W1105 16:02:52.045288 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.045519 kubelet[2841]: E1105 16:02:52.045484 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.045788 kubelet[2841]: E1105 16:02:52.045710 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.045788 kubelet[2841]: W1105 16:02:52.045738 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.045983 kubelet[2841]: E1105 16:02:52.045900 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:52.046097 kubelet[2841]: E1105 16:02:52.046083 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.046189 kubelet[2841]: W1105 16:02:52.046148 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.046220 kubelet[2841]: E1105 16:02:52.046204 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.046446 kubelet[2841]: E1105 16:02:52.046419 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.046446 kubelet[2841]: W1105 16:02:52.046431 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.046639 kubelet[2841]: E1105 16:02:52.046572 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:52.047154 kubelet[2841]: E1105 16:02:52.046982 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:52.047154 kubelet[2841]: W1105 16:02:52.046996 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:52.047154 kubelet[2841]: E1105 16:02:52.047013 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:52.948408 kubelet[2841]: I1105 16:02:52.948321 2841 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 16:02:52.949149 kubelet[2841]: E1105 16:02:52.948854 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:53.033333 kubelet[2841]: E1105 16:02:53.030304 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:53.033333 kubelet[2841]: W1105 16:02:53.030354 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:53.033333 kubelet[2841]: E1105 16:02:53.030388 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:53.033333 kubelet[2841]: E1105 16:02:53.030714 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:53.033333 kubelet[2841]: W1105 16:02:53.030722 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:53.033333 kubelet[2841]: E1105 16:02:53.030732 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:02:53.066298 kubelet[2841]: E1105 16:02:53.066268 2841 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:02:53.066298 kubelet[2841]: W1105 16:02:53.066292 2841 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:02:53.066379 kubelet[2841]: E1105 16:02:53.066305 2841 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:02:53.165510 containerd[1621]: time="2025-11-05T16:02:53.165373974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:53.166350 containerd[1621]: time="2025-11-05T16:02:53.166316686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 16:02:53.167530 containerd[1621]: time="2025-11-05T16:02:53.167492205Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:53.169735 containerd[1621]: time="2025-11-05T16:02:53.169668619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:53.170276 containerd[1621]: time="2025-11-05T16:02:53.170236610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.810348612s" Nov 5 16:02:53.170276 containerd[1621]: time="2025-11-05T16:02:53.170269794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 16:02:53.172435 containerd[1621]: time="2025-11-05T16:02:53.172391351Z" level=info msg="CreateContainer within sandbox \"eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 16:02:53.186658 containerd[1621]: time="2025-11-05T16:02:53.186591453Z" level=info msg="Container 656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:53.199776 containerd[1621]: time="2025-11-05T16:02:53.199580557Z" level=info msg="CreateContainer within sandbox \"eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262\"" Nov 5 16:02:53.200113 containerd[1621]: time="2025-11-05T16:02:53.200087602Z" level=info msg="StartContainer for \"656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262\"" Nov 5 16:02:53.201759 containerd[1621]: time="2025-11-05T16:02:53.201723226Z" level=info msg="connecting to shim 656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262" address="unix:///run/containerd/s/23e4c5514699ce8d441941f186cc94ce52a7bc1aa25853c80bdc3701601432ce" protocol=ttrpc version=3 Nov 5 16:02:53.227969 systemd[1]: Started cri-containerd-656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262.scope - libcontainer container 656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262. 
Nov 5 16:02:53.276115 containerd[1621]: time="2025-11-05T16:02:53.276069978Z" level=info msg="StartContainer for \"656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262\" returns successfully" Nov 5 16:02:53.288224 systemd[1]: cri-containerd-656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262.scope: Deactivated successfully. Nov 5 16:02:53.291162 containerd[1621]: time="2025-11-05T16:02:53.291105455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262\" id:\"656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262\" pid:3564 exited_at:{seconds:1762358573 nanos:290506063}" Nov 5 16:02:53.291162 containerd[1621]: time="2025-11-05T16:02:53.291156132Z" level=info msg="received exit event container_id:\"656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262\" id:\"656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262\" pid:3564 exited_at:{seconds:1762358573 nanos:290506063}" Nov 5 16:02:53.319814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-656ab685436a617d0b5a062c2f479d2d4f62450e2669c3c6de6f2d0651b0b262-rootfs.mount: Deactivated successfully. 
Nov 5 16:02:53.855163 kubelet[2841]: E1105 16:02:53.855076 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:02:53.952583 kubelet[2841]: E1105 16:02:53.952530 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:53.953829 containerd[1621]: time="2025-11-05T16:02:53.953742318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 16:02:54.511744 kubelet[2841]: I1105 16:02:54.511621 2841 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 16:02:54.516731 kubelet[2841]: E1105 16:02:54.512059 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:54.954865 kubelet[2841]: E1105 16:02:54.954718 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:55.858077 kubelet[2841]: E1105 16:02:55.858007 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:02:57.855612 kubelet[2841]: E1105 16:02:57.855544 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:02:57.993563 containerd[1621]: time="2025-11-05T16:02:57.993506575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:57.994316 containerd[1621]: time="2025-11-05T16:02:57.994282263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 16:02:57.995431 containerd[1621]: time="2025-11-05T16:02:57.995404615Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:57.997563 containerd[1621]: time="2025-11-05T16:02:57.997506275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:57.998137 containerd[1621]: time="2025-11-05T16:02:57.998109481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.044276028s" Nov 5 16:02:57.998137 containerd[1621]: time="2025-11-05T16:02:57.998137234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 16:02:58.000535 containerd[1621]: time="2025-11-05T16:02:58.000460579Z" level=info msg="CreateContainer within sandbox \"eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 16:02:58.010223 containerd[1621]: time="2025-11-05T16:02:58.010168983Z" level=info msg="Container 39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:58.022697 containerd[1621]: time="2025-11-05T16:02:58.022633619Z" level=info msg="CreateContainer within sandbox \"eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39\"" Nov 5 16:02:58.023135 containerd[1621]: time="2025-11-05T16:02:58.023114159Z" level=info msg="StartContainer for \"39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39\"" Nov 5 16:02:58.024686 containerd[1621]: time="2025-11-05T16:02:58.024641787Z" level=info msg="connecting to shim 39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39" address="unix:///run/containerd/s/23e4c5514699ce8d441941f186cc94ce52a7bc1aa25853c80bdc3701601432ce" protocol=ttrpc version=3 Nov 5 16:02:58.054965 systemd[1]: Started cri-containerd-39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39.scope - libcontainer container 39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39. Nov 5 16:02:58.101947 containerd[1621]: time="2025-11-05T16:02:58.101908395Z" level=info msg="StartContainer for \"39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39\" returns successfully" Nov 5 16:02:58.968223 kubelet[2841]: E1105 16:02:58.968176 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:02:59.612891 systemd[1]: cri-containerd-39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39.scope: Deactivated successfully. 
Nov 5 16:02:59.613819 systemd[1]: cri-containerd-39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39.scope: Consumed 618ms CPU time, 180.8M memory peak, 3.6M read from disk, 171.3M written to disk. Nov 5 16:02:59.615229 containerd[1621]: time="2025-11-05T16:02:59.615179620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39\" id:\"39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39\" pid:3628 exited_at:{seconds:1762358579 nanos:614691375}" Nov 5 16:02:59.615914 containerd[1621]: time="2025-11-05T16:02:59.615888709Z" level=info msg="received exit event container_id:\"39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39\" id:\"39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39\" pid:3628 exited_at:{seconds:1762358579 nanos:614691375}" Nov 5 16:02:59.644422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39ebc29471b069b1914f7bfc9f5cec507c2521720cc02e8644c106e049060b39-rootfs.mount: Deactivated successfully. Nov 5 16:02:59.689604 kubelet[2841]: I1105 16:02:59.689523 2841 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 16:02:59.970439 kubelet[2841]: E1105 16:02:59.970319 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:00.056248 systemd[1]: Created slice kubepods-besteffort-pod09c014d2_b99d_493b_9e36_c9afae0fa214.slice - libcontainer container kubepods-besteffort-pod09c014d2_b99d_493b_9e36_c9afae0fa214.slice. 
Nov 5 16:03:00.059361 containerd[1621]: time="2025-11-05T16:03:00.059319986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7g79,Uid:09c014d2-b99d-493b-9e36-c9afae0fa214,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:00.061731 systemd[1]: Created slice kubepods-besteffort-pode2f5a50e_d006_4d79_9642_9b2ea8b5bc20.slice - libcontainer container kubepods-besteffort-pode2f5a50e_d006_4d79_9642_9b2ea8b5bc20.slice. Nov 5 16:03:00.077896 systemd[1]: Created slice kubepods-besteffort-podee8d6811_815a_4547_b978_c3b4809dcbf0.slice - libcontainer container kubepods-besteffort-podee8d6811_815a_4547_b978_c3b4809dcbf0.slice. Nov 5 16:03:00.091029 systemd[1]: Created slice kubepods-besteffort-pod7606f5a8_f065_4081_b3bc_2344e91053d9.slice - libcontainer container kubepods-besteffort-pod7606f5a8_f065_4081_b3bc_2344e91053d9.slice. Nov 5 16:03:00.099334 systemd[1]: Created slice kubepods-burstable-pod813fc040_02be_4987_a338_2511b9fa3fec.slice - libcontainer container kubepods-burstable-pod813fc040_02be_4987_a338_2511b9fa3fec.slice. Nov 5 16:03:00.109745 systemd[1]: Created slice kubepods-besteffort-podbdb8325a_0fd4_4d9c_86b5_8a27d438208e.slice - libcontainer container kubepods-besteffort-podbdb8325a_0fd4_4d9c_86b5_8a27d438208e.slice. 
Nov 5 16:03:00.117413 kubelet[2841]: I1105 16:03:00.117194 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-whisker-backend-key-pair\") pod \"whisker-7848fdffd7-zh9r6\" (UID: \"bdb8325a-0fd4-4d9c-86b5-8a27d438208e\") " pod="calico-system/whisker-7848fdffd7-zh9r6" Nov 5 16:03:00.117641 kubelet[2841]: I1105 16:03:00.117594 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg9x7\" (UniqueName: \"kubernetes.io/projected/7606f5a8-f065-4081-b3bc-2344e91053d9-kube-api-access-rg9x7\") pod \"calico-apiserver-5f5b56f6b8-98rmx\" (UID: \"7606f5a8-f065-4081-b3bc-2344e91053d9\") " pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" Nov 5 16:03:00.118797 kubelet[2841]: I1105 16:03:00.117746 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49cc43d2-d1dd-4d90-a5d3-9c3601306d8f-goldmane-ca-bundle\") pod \"goldmane-666569f655-r5tq9\" (UID: \"49cc43d2-d1dd-4d90-a5d3-9c3601306d8f\") " pod="calico-system/goldmane-666569f655-r5tq9" Nov 5 16:03:00.118797 kubelet[2841]: I1105 16:03:00.117794 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ng6w\" (UniqueName: \"kubernetes.io/projected/49cc43d2-d1dd-4d90-a5d3-9c3601306d8f-kube-api-access-8ng6w\") pod \"goldmane-666569f655-r5tq9\" (UID: \"49cc43d2-d1dd-4d90-a5d3-9c3601306d8f\") " pod="calico-system/goldmane-666569f655-r5tq9" Nov 5 16:03:00.118797 kubelet[2841]: I1105 16:03:00.117844 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2f5a50e-d006-4d79-9642-9b2ea8b5bc20-calico-apiserver-certs\") pod 
\"calico-apiserver-5f5b56f6b8-w6xvl\" (UID: \"e2f5a50e-d006-4d79-9642-9b2ea8b5bc20\") " pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" Nov 5 16:03:00.118797 kubelet[2841]: I1105 16:03:00.117871 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6jjx\" (UniqueName: \"kubernetes.io/projected/813fc040-02be-4987-a338-2511b9fa3fec-kube-api-access-k6jjx\") pod \"coredns-668d6bf9bc-s8jjg\" (UID: \"813fc040-02be-4987-a338-2511b9fa3fec\") " pod="kube-system/coredns-668d6bf9bc-s8jjg" Nov 5 16:03:00.118797 kubelet[2841]: I1105 16:03:00.117899 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7606f5a8-f065-4081-b3bc-2344e91053d9-calico-apiserver-certs\") pod \"calico-apiserver-5f5b56f6b8-98rmx\" (UID: \"7606f5a8-f065-4081-b3bc-2344e91053d9\") " pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" Nov 5 16:03:00.119031 kubelet[2841]: I1105 16:03:00.117924 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7b4\" (UniqueName: \"kubernetes.io/projected/2f03bfaf-897c-4a77-856b-68e383753ef9-kube-api-access-2t7b4\") pod \"coredns-668d6bf9bc-svhlp\" (UID: \"2f03bfaf-897c-4a77-856b-68e383753ef9\") " pod="kube-system/coredns-668d6bf9bc-svhlp" Nov 5 16:03:00.119031 kubelet[2841]: I1105 16:03:00.118025 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-whisker-ca-bundle\") pod \"whisker-7848fdffd7-zh9r6\" (UID: \"bdb8325a-0fd4-4d9c-86b5-8a27d438208e\") " pod="calico-system/whisker-7848fdffd7-zh9r6" Nov 5 16:03:00.119031 kubelet[2841]: I1105 16:03:00.118058 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcvwk\" 
(UniqueName: \"kubernetes.io/projected/ee8d6811-815a-4547-b978-c3b4809dcbf0-kube-api-access-tcvwk\") pod \"calico-kube-controllers-6894bd5d4c-lzwwl\" (UID: \"ee8d6811-815a-4547-b978-c3b4809dcbf0\") " pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" Nov 5 16:03:00.119031 kubelet[2841]: I1105 16:03:00.118118 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f03bfaf-897c-4a77-856b-68e383753ef9-config-volume\") pod \"coredns-668d6bf9bc-svhlp\" (UID: \"2f03bfaf-897c-4a77-856b-68e383753ef9\") " pod="kube-system/coredns-668d6bf9bc-svhlp" Nov 5 16:03:00.119031 kubelet[2841]: I1105 16:03:00.118148 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4fwz\" (UniqueName: \"kubernetes.io/projected/e2f5a50e-d006-4d79-9642-9b2ea8b5bc20-kube-api-access-n4fwz\") pod \"calico-apiserver-5f5b56f6b8-w6xvl\" (UID: \"e2f5a50e-d006-4d79-9642-9b2ea8b5bc20\") " pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" Nov 5 16:03:00.119189 kubelet[2841]: I1105 16:03:00.118177 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/49cc43d2-d1dd-4d90-a5d3-9c3601306d8f-goldmane-key-pair\") pod \"goldmane-666569f655-r5tq9\" (UID: \"49cc43d2-d1dd-4d90-a5d3-9c3601306d8f\") " pod="calico-system/goldmane-666569f655-r5tq9" Nov 5 16:03:00.119189 kubelet[2841]: I1105 16:03:00.118200 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb5lh\" (UniqueName: \"kubernetes.io/projected/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-kube-api-access-xb5lh\") pod \"whisker-7848fdffd7-zh9r6\" (UID: \"bdb8325a-0fd4-4d9c-86b5-8a27d438208e\") " pod="calico-system/whisker-7848fdffd7-zh9r6" Nov 5 16:03:00.119189 kubelet[2841]: I1105 16:03:00.118224 2841 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/813fc040-02be-4987-a338-2511b9fa3fec-config-volume\") pod \"coredns-668d6bf9bc-s8jjg\" (UID: \"813fc040-02be-4987-a338-2511b9fa3fec\") " pod="kube-system/coredns-668d6bf9bc-s8jjg" Nov 5 16:03:00.119189 kubelet[2841]: I1105 16:03:00.118383 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee8d6811-815a-4547-b978-c3b4809dcbf0-tigera-ca-bundle\") pod \"calico-kube-controllers-6894bd5d4c-lzwwl\" (UID: \"ee8d6811-815a-4547-b978-c3b4809dcbf0\") " pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" Nov 5 16:03:00.119189 kubelet[2841]: I1105 16:03:00.118416 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49cc43d2-d1dd-4d90-a5d3-9c3601306d8f-config\") pod \"goldmane-666569f655-r5tq9\" (UID: \"49cc43d2-d1dd-4d90-a5d3-9c3601306d8f\") " pod="calico-system/goldmane-666569f655-r5tq9" Nov 5 16:03:00.145285 systemd[1]: Created slice kubepods-besteffort-pod49cc43d2_d1dd_4d90_a5d3_9c3601306d8f.slice - libcontainer container kubepods-besteffort-pod49cc43d2_d1dd_4d90_a5d3_9c3601306d8f.slice. Nov 5 16:03:00.149031 systemd[1]: Created slice kubepods-burstable-pod2f03bfaf_897c_4a77_856b_68e383753ef9.slice - libcontainer container kubepods-burstable-pod2f03bfaf_897c_4a77_856b_68e383753ef9.slice. 
Nov 5 16:03:00.201669 containerd[1621]: time="2025-11-05T16:03:00.201601954Z" level=error msg="Failed to destroy network for sandbox \"2d8c2a6fdd9f7da197e4e4a2fa327f2d309ac105cdaf668fd2fad4e99e36b243\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.203619 containerd[1621]: time="2025-11-05T16:03:00.203572286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7g79,Uid:09c014d2-b99d-493b-9e36-c9afae0fa214,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8c2a6fdd9f7da197e4e4a2fa327f2d309ac105cdaf668fd2fad4e99e36b243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.203848 systemd[1]: run-netns-cni\x2d0e7d281f\x2d3b7b\x2d43e2\x2dc46b\x2dff0d1fa72f8a.mount: Deactivated successfully. 
Nov 5 16:03:00.204458 kubelet[2841]: E1105 16:03:00.203848 2841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8c2a6fdd9f7da197e4e4a2fa327f2d309ac105cdaf668fd2fad4e99e36b243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.204458 kubelet[2841]: E1105 16:03:00.204109 2841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8c2a6fdd9f7da197e4e4a2fa327f2d309ac105cdaf668fd2fad4e99e36b243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m7g79" Nov 5 16:03:00.204458 kubelet[2841]: E1105 16:03:00.204130 2841 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8c2a6fdd9f7da197e4e4a2fa327f2d309ac105cdaf668fd2fad4e99e36b243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m7g79" Nov 5 16:03:00.204708 kubelet[2841]: E1105 16:03:00.204185 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m7g79_calico-system(09c014d2-b99d-493b-9e36-c9afae0fa214)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m7g79_calico-system(09c014d2-b99d-493b-9e36-c9afae0fa214)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d8c2a6fdd9f7da197e4e4a2fa327f2d309ac105cdaf668fd2fad4e99e36b243\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:03:00.372592 containerd[1621]: time="2025-11-05T16:03:00.372527631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5b56f6b8-w6xvl,Uid:e2f5a50e-d006-4d79-9642-9b2ea8b5bc20,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:00.388460 containerd[1621]: time="2025-11-05T16:03:00.388407248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6894bd5d4c-lzwwl,Uid:ee8d6811-815a-4547-b978-c3b4809dcbf0,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:00.396821 containerd[1621]: time="2025-11-05T16:03:00.396584559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5b56f6b8-98rmx,Uid:7606f5a8-f065-4081-b3bc-2344e91053d9,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:00.403989 kubelet[2841]: E1105 16:03:00.403931 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:00.404634 containerd[1621]: time="2025-11-05T16:03:00.404601586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s8jjg,Uid:813fc040-02be-4987-a338-2511b9fa3fec,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:00.424163 containerd[1621]: time="2025-11-05T16:03:00.424101822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7848fdffd7-zh9r6,Uid:bdb8325a-0fd4-4d9c-86b5-8a27d438208e,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:00.454449 containerd[1621]: time="2025-11-05T16:03:00.454343639Z" level=error msg="Failed to destroy network for sandbox \"ec6347506361d3912d36999ff7f043871c6e21c3c38673d7f34ac3cbe91a797b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.455101 kubelet[2841]: E1105 16:03:00.455040 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:00.457783 containerd[1621]: time="2025-11-05T16:03:00.457464894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5b56f6b8-w6xvl,Uid:e2f5a50e-d006-4d79-9642-9b2ea8b5bc20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6347506361d3912d36999ff7f043871c6e21c3c38673d7f34ac3cbe91a797b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.458043 kubelet[2841]: E1105 16:03:00.458013 2841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6347506361d3912d36999ff7f043871c6e21c3c38673d7f34ac3cbe91a797b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.459879 kubelet[2841]: E1105 16:03:00.459683 2841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec6347506361d3912d36999ff7f043871c6e21c3c38673d7f34ac3cbe91a797b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" Nov 5 16:03:00.459879 kubelet[2841]: E1105 16:03:00.459726 2841 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"ec6347506361d3912d36999ff7f043871c6e21c3c38673d7f34ac3cbe91a797b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" Nov 5 16:03:00.459879 kubelet[2841]: E1105 16:03:00.459822 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5b56f6b8-w6xvl_calico-apiserver(e2f5a50e-d006-4d79-9642-9b2ea8b5bc20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5b56f6b8-w6xvl_calico-apiserver(e2f5a50e-d006-4d79-9642-9b2ea8b5bc20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec6347506361d3912d36999ff7f043871c6e21c3c38673d7f34ac3cbe91a797b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" podUID="e2f5a50e-d006-4d79-9642-9b2ea8b5bc20" Nov 5 16:03:00.460025 containerd[1621]: time="2025-11-05T16:03:00.459846492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-svhlp,Uid:2f03bfaf-897c-4a77-856b-68e383753ef9,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:00.460565 containerd[1621]: time="2025-11-05T16:03:00.460519821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5tq9,Uid:49cc43d2-d1dd-4d90-a5d3-9c3601306d8f,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:00.508201 containerd[1621]: time="2025-11-05T16:03:00.508140915Z" level=error msg="Failed to destroy network for sandbox \"3487f4bf46fc006fcc1aaffedf978def260684f91c33ea74262063a1286b4929\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.511749 containerd[1621]: time="2025-11-05T16:03:00.511697573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6894bd5d4c-lzwwl,Uid:ee8d6811-815a-4547-b978-c3b4809dcbf0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3487f4bf46fc006fcc1aaffedf978def260684f91c33ea74262063a1286b4929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.512314 kubelet[2841]: E1105 16:03:00.512267 2841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3487f4bf46fc006fcc1aaffedf978def260684f91c33ea74262063a1286b4929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.512413 kubelet[2841]: E1105 16:03:00.512372 2841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3487f4bf46fc006fcc1aaffedf978def260684f91c33ea74262063a1286b4929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" Nov 5 16:03:00.512466 kubelet[2841]: E1105 16:03:00.512420 2841 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3487f4bf46fc006fcc1aaffedf978def260684f91c33ea74262063a1286b4929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" Nov 5 16:03:00.512790 kubelet[2841]: E1105 16:03:00.512607 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6894bd5d4c-lzwwl_calico-system(ee8d6811-815a-4547-b978-c3b4809dcbf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6894bd5d4c-lzwwl_calico-system(ee8d6811-815a-4547-b978-c3b4809dcbf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3487f4bf46fc006fcc1aaffedf978def260684f91c33ea74262063a1286b4929\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" podUID="ee8d6811-815a-4547-b978-c3b4809dcbf0" Nov 5 16:03:00.513810 containerd[1621]: time="2025-11-05T16:03:00.513653948Z" level=error msg="Failed to destroy network for sandbox \"39c528ec405ca21815b6962a5e589c0246c0b830f7597cf9d3fbe535bd9cab99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.516238 containerd[1621]: time="2025-11-05T16:03:00.516195583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5b56f6b8-98rmx,Uid:7606f5a8-f065-4081-b3bc-2344e91053d9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c528ec405ca21815b6962a5e589c0246c0b830f7597cf9d3fbe535bd9cab99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.517589 kubelet[2841]: 
E1105 16:03:00.517549 2841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c528ec405ca21815b6962a5e589c0246c0b830f7597cf9d3fbe535bd9cab99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.517696 kubelet[2841]: E1105 16:03:00.517611 2841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c528ec405ca21815b6962a5e589c0246c0b830f7597cf9d3fbe535bd9cab99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" Nov 5 16:03:00.517696 kubelet[2841]: E1105 16:03:00.517633 2841 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c528ec405ca21815b6962a5e589c0246c0b830f7597cf9d3fbe535bd9cab99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" Nov 5 16:03:00.517696 kubelet[2841]: E1105 16:03:00.517673 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5b56f6b8-98rmx_calico-apiserver(7606f5a8-f065-4081-b3bc-2344e91053d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5b56f6b8-98rmx_calico-apiserver(7606f5a8-f065-4081-b3bc-2344e91053d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39c528ec405ca21815b6962a5e589c0246c0b830f7597cf9d3fbe535bd9cab99\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" podUID="7606f5a8-f065-4081-b3bc-2344e91053d9" Nov 5 16:03:00.557180 containerd[1621]: time="2025-11-05T16:03:00.557129457Z" level=error msg="Failed to destroy network for sandbox \"5b532f5c264c515930eb8366a2ce9886789a743ee8e30858b0071bc6c5a3091e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.559161 containerd[1621]: time="2025-11-05T16:03:00.559041967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s8jjg,Uid:813fc040-02be-4987-a338-2511b9fa3fec,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b532f5c264c515930eb8366a2ce9886789a743ee8e30858b0071bc6c5a3091e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.559946 kubelet[2841]: E1105 16:03:00.559450 2841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b532f5c264c515930eb8366a2ce9886789a743ee8e30858b0071bc6c5a3091e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.559946 kubelet[2841]: E1105 16:03:00.559544 2841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b532f5c264c515930eb8366a2ce9886789a743ee8e30858b0071bc6c5a3091e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s8jjg" Nov 5 16:03:00.559946 kubelet[2841]: E1105 16:03:00.559572 2841 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b532f5c264c515930eb8366a2ce9886789a743ee8e30858b0071bc6c5a3091e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s8jjg" Nov 5 16:03:00.560087 kubelet[2841]: E1105 16:03:00.559630 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s8jjg_kube-system(813fc040-02be-4987-a338-2511b9fa3fec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s8jjg_kube-system(813fc040-02be-4987-a338-2511b9fa3fec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b532f5c264c515930eb8366a2ce9886789a743ee8e30858b0071bc6c5a3091e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s8jjg" podUID="813fc040-02be-4987-a338-2511b9fa3fec" Nov 5 16:03:00.562294 containerd[1621]: time="2025-11-05T16:03:00.562257933Z" level=error msg="Failed to destroy network for sandbox \"112ef29a49c7051fefaa5596c9347e9d8cb959af9e62d9d326d64d25fc9b0478\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.564159 containerd[1621]: time="2025-11-05T16:03:00.564076454Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-svhlp,Uid:2f03bfaf-897c-4a77-856b-68e383753ef9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"112ef29a49c7051fefaa5596c9347e9d8cb959af9e62d9d326d64d25fc9b0478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.565164 kubelet[2841]: E1105 16:03:00.564912 2841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"112ef29a49c7051fefaa5596c9347e9d8cb959af9e62d9d326d64d25fc9b0478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.565164 kubelet[2841]: E1105 16:03:00.564995 2841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"112ef29a49c7051fefaa5596c9347e9d8cb959af9e62d9d326d64d25fc9b0478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-svhlp" Nov 5 16:03:00.565164 kubelet[2841]: E1105 16:03:00.565021 2841 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"112ef29a49c7051fefaa5596c9347e9d8cb959af9e62d9d326d64d25fc9b0478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-svhlp" Nov 5 16:03:00.565334 kubelet[2841]: E1105 16:03:00.565074 2841 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-svhlp_kube-system(2f03bfaf-897c-4a77-856b-68e383753ef9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-svhlp_kube-system(2f03bfaf-897c-4a77-856b-68e383753ef9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"112ef29a49c7051fefaa5596c9347e9d8cb959af9e62d9d326d64d25fc9b0478\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-svhlp" podUID="2f03bfaf-897c-4a77-856b-68e383753ef9" Nov 5 16:03:00.573663 containerd[1621]: time="2025-11-05T16:03:00.573466959Z" level=error msg="Failed to destroy network for sandbox \"5610fdd65efe81ce60232f5cc75e167110cb05450e438c7cce55423a3448ca4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.575605 containerd[1621]: time="2025-11-05T16:03:00.575548403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7848fdffd7-zh9r6,Uid:bdb8325a-0fd4-4d9c-86b5-8a27d438208e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5610fdd65efe81ce60232f5cc75e167110cb05450e438c7cce55423a3448ca4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.576079 kubelet[2841]: E1105 16:03:00.576029 2841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5610fdd65efe81ce60232f5cc75e167110cb05450e438c7cce55423a3448ca4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.576148 kubelet[2841]: E1105 16:03:00.576100 2841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5610fdd65efe81ce60232f5cc75e167110cb05450e438c7cce55423a3448ca4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7848fdffd7-zh9r6" Nov 5 16:03:00.576148 kubelet[2841]: E1105 16:03:00.576125 2841 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5610fdd65efe81ce60232f5cc75e167110cb05450e438c7cce55423a3448ca4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7848fdffd7-zh9r6" Nov 5 16:03:00.576211 kubelet[2841]: E1105 16:03:00.576171 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7848fdffd7-zh9r6_calico-system(bdb8325a-0fd4-4d9c-86b5-8a27d438208e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7848fdffd7-zh9r6_calico-system(bdb8325a-0fd4-4d9c-86b5-8a27d438208e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5610fdd65efe81ce60232f5cc75e167110cb05450e438c7cce55423a3448ca4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7848fdffd7-zh9r6" podUID="bdb8325a-0fd4-4d9c-86b5-8a27d438208e" Nov 5 16:03:00.590378 containerd[1621]: time="2025-11-05T16:03:00.590304769Z" level=error msg="Failed to destroy 
network for sandbox \"706823650465854e30db6402c6d8b41eb722f551f1ea0ada1a9acf9bc45df6be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.591838 containerd[1621]: time="2025-11-05T16:03:00.591749585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5tq9,Uid:49cc43d2-d1dd-4d90-a5d3-9c3601306d8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"706823650465854e30db6402c6d8b41eb722f551f1ea0ada1a9acf9bc45df6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.592092 kubelet[2841]: E1105 16:03:00.592056 2841 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"706823650465854e30db6402c6d8b41eb722f551f1ea0ada1a9acf9bc45df6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:00.592161 kubelet[2841]: E1105 16:03:00.592113 2841 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"706823650465854e30db6402c6d8b41eb722f551f1ea0ada1a9acf9bc45df6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r5tq9" Nov 5 16:03:00.592161 kubelet[2841]: E1105 16:03:00.592130 2841 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"706823650465854e30db6402c6d8b41eb722f551f1ea0ada1a9acf9bc45df6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r5tq9" Nov 5 16:03:00.592214 kubelet[2841]: E1105 16:03:00.592190 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-r5tq9_calico-system(49cc43d2-d1dd-4d90-a5d3-9c3601306d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-r5tq9_calico-system(49cc43d2-d1dd-4d90-a5d3-9c3601306d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"706823650465854e30db6402c6d8b41eb722f551f1ea0ada1a9acf9bc45df6be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-r5tq9" podUID="49cc43d2-d1dd-4d90-a5d3-9c3601306d8f" Nov 5 16:03:00.976219 kubelet[2841]: E1105 16:03:00.975878 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:00.976714 containerd[1621]: time="2025-11-05T16:03:00.976561714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 16:03:10.327230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223378514.mount: Deactivated successfully. 
Nov 5 16:03:11.422718 containerd[1621]: time="2025-11-05T16:03:11.422644184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:11.423795 containerd[1621]: time="2025-11-05T16:03:11.423739089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 16:03:11.425193 containerd[1621]: time="2025-11-05T16:03:11.425135871Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:11.439370 containerd[1621]: time="2025-11-05T16:03:11.439289372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:11.440316 containerd[1621]: time="2025-11-05T16:03:11.440252746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.463647239s" Nov 5 16:03:11.440316 containerd[1621]: time="2025-11-05T16:03:11.440306639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 16:03:11.456942 containerd[1621]: time="2025-11-05T16:03:11.456894808Z" level=info msg="CreateContainer within sandbox \"eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 16:03:11.494088 containerd[1621]: time="2025-11-05T16:03:11.494036579Z" level=info msg="Container 
8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:11.510431 containerd[1621]: time="2025-11-05T16:03:11.510380353Z" level=info msg="CreateContainer within sandbox \"eca7e2f0bca80bb90a315041b0b144270a293d66ed8cc0294e3f33855858ab00\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28\"" Nov 5 16:03:11.511010 containerd[1621]: time="2025-11-05T16:03:11.510980415Z" level=info msg="StartContainer for \"8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28\"" Nov 5 16:03:11.512798 containerd[1621]: time="2025-11-05T16:03:11.512731130Z" level=info msg="connecting to shim 8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28" address="unix:///run/containerd/s/23e4c5514699ce8d441941f186cc94ce52a7bc1aa25853c80bdc3701601432ce" protocol=ttrpc version=3 Nov 5 16:03:11.539092 systemd[1]: Started cri-containerd-8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28.scope - libcontainer container 8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28. Nov 5 16:03:11.645191 containerd[1621]: time="2025-11-05T16:03:11.645139194Z" level=info msg="StartContainer for \"8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28\" returns successfully" Nov 5 16:03:11.726272 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 16:03:11.727839 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Nov 5 16:03:11.857532 containerd[1621]: time="2025-11-05T16:03:11.857486123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5b56f6b8-98rmx,Uid:7606f5a8-f065-4081-b3bc-2344e91053d9,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:11.858204 containerd[1621]: time="2025-11-05T16:03:11.857855076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7g79,Uid:09c014d2-b99d-493b-9e36-c9afae0fa214,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:11.893607 kubelet[2841]: I1105 16:03:11.893545 2841 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-whisker-backend-key-pair\") pod \"bdb8325a-0fd4-4d9c-86b5-8a27d438208e\" (UID: \"bdb8325a-0fd4-4d9c-86b5-8a27d438208e\") " Nov 5 16:03:11.894919 kubelet[2841]: I1105 16:03:11.894566 2841 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-whisker-ca-bundle\") pod \"bdb8325a-0fd4-4d9c-86b5-8a27d438208e\" (UID: \"bdb8325a-0fd4-4d9c-86b5-8a27d438208e\") " Nov 5 16:03:11.894919 kubelet[2841]: I1105 16:03:11.894602 2841 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb5lh\" (UniqueName: \"kubernetes.io/projected/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-kube-api-access-xb5lh\") pod \"bdb8325a-0fd4-4d9c-86b5-8a27d438208e\" (UID: \"bdb8325a-0fd4-4d9c-86b5-8a27d438208e\") " Nov 5 16:03:11.897787 kubelet[2841]: I1105 16:03:11.897681 2841 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bdb8325a-0fd4-4d9c-86b5-8a27d438208e" (UID: "bdb8325a-0fd4-4d9c-86b5-8a27d438208e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 16:03:11.901237 kubelet[2841]: I1105 16:03:11.901166 2841 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-kube-api-access-xb5lh" (OuterVolumeSpecName: "kube-api-access-xb5lh") pod "bdb8325a-0fd4-4d9c-86b5-8a27d438208e" (UID: "bdb8325a-0fd4-4d9c-86b5-8a27d438208e"). InnerVolumeSpecName "kube-api-access-xb5lh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 16:03:11.903313 kubelet[2841]: I1105 16:03:11.903287 2841 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bdb8325a-0fd4-4d9c-86b5-8a27d438208e" (UID: "bdb8325a-0fd4-4d9c-86b5-8a27d438208e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 16:03:11.995105 kubelet[2841]: I1105 16:03:11.995015 2841 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 16:03:11.995998 kubelet[2841]: I1105 16:03:11.995963 2841 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 16:03:11.995998 kubelet[2841]: I1105 16:03:11.995982 2841 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xb5lh\" (UniqueName: \"kubernetes.io/projected/bdb8325a-0fd4-4d9c-86b5-8a27d438208e-kube-api-access-xb5lh\") on node \"localhost\" DevicePath \"\"" Nov 5 16:03:12.005609 kubelet[2841]: E1105 16:03:12.005232 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:12.013105 systemd[1]: Removed slice kubepods-besteffort-podbdb8325a_0fd4_4d9c_86b5_8a27d438208e.slice - libcontainer container kubepods-besteffort-podbdb8325a_0fd4_4d9c_86b5_8a27d438208e.slice. Nov 5 16:03:12.030399 kubelet[2841]: I1105 16:03:12.030344 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gjss5" podStartSLOduration=1.184424337 podStartE2EDuration="24.030324001s" podCreationTimestamp="2025-11-05 16:02:48 +0000 UTC" firstStartedPulling="2025-11-05 16:02:48.598717747 +0000 UTC m=+20.902853181" lastFinishedPulling="2025-11-05 16:03:11.44461741 +0000 UTC m=+43.748752845" observedRunningTime="2025-11-05 16:03:12.029633165 +0000 UTC m=+44.333768599" watchObservedRunningTime="2025-11-05 16:03:12.030324001 +0000 UTC m=+44.334459435" Nov 5 16:03:12.092249 systemd[1]: Created slice kubepods-besteffort-podb92d66c6_1332_4195_9d17_9fdc4c32310f.slice - libcontainer container kubepods-besteffort-podb92d66c6_1332_4195_9d17_9fdc4c32310f.slice. 
Nov 5 16:03:12.107002 systemd-networkd[1510]: calibeed7f1d10c: Link UP Nov 5 16:03:12.107232 systemd-networkd[1510]: calibeed7f1d10c: Gained carrier Nov 5 16:03:12.128143 containerd[1621]: 2025-11-05 16:03:11.905 [INFO][3995] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:03:12.128143 containerd[1621]: 2025-11-05 16:03:11.930 [INFO][3995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--m7g79-eth0 csi-node-driver- calico-system 09c014d2-b99d-493b-9e36-c9afae0fa214 716 0 2025-11-05 16:02:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-m7g79 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibeed7f1d10c [] [] }} ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Namespace="calico-system" Pod="csi-node-driver-m7g79" WorkloadEndpoint="localhost-k8s-csi--node--driver--m7g79-" Nov 5 16:03:12.128143 containerd[1621]: 2025-11-05 16:03:11.930 [INFO][3995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Namespace="calico-system" Pod="csi-node-driver-m7g79" WorkloadEndpoint="localhost-k8s-csi--node--driver--m7g79-eth0" Nov 5 16:03:12.128143 containerd[1621]: 2025-11-05 16:03:12.021 [INFO][4027] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" HandleID="k8s-pod-network.2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Workload="localhost-k8s-csi--node--driver--m7g79-eth0" Nov 5 16:03:12.128381 containerd[1621]: 
2025-11-05 16:03:12.022 [INFO][4027] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" HandleID="k8s-pod-network.2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Workload="localhost-k8s-csi--node--driver--m7g79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f2520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-m7g79", "timestamp":"2025-11-05 16:03:12.021509606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:12.128381 containerd[1621]: 2025-11-05 16:03:12.022 [INFO][4027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:12.128381 containerd[1621]: 2025-11-05 16:03:12.022 [INFO][4027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:12.128381 containerd[1621]: 2025-11-05 16:03:12.022 [INFO][4027] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:03:12.128381 containerd[1621]: 2025-11-05 16:03:12.035 [INFO][4027] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" host="localhost" Nov 5 16:03:12.128381 containerd[1621]: 2025-11-05 16:03:12.043 [INFO][4027] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:03:12.128381 containerd[1621]: 2025-11-05 16:03:12.057 [INFO][4027] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:03:12.128381 containerd[1621]: 2025-11-05 16:03:12.060 [INFO][4027] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:12.128381 containerd[1621]: 2025-11-05 16:03:12.063 [INFO][4027] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:12.128381 containerd[1621]: 2025-11-05 16:03:12.064 [INFO][4027] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" host="localhost" Nov 5 16:03:12.128754 containerd[1621]: 2025-11-05 16:03:12.067 [INFO][4027] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce Nov 5 16:03:12.128754 containerd[1621]: 2025-11-05 16:03:12.072 [INFO][4027] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" host="localhost" Nov 5 16:03:12.128754 containerd[1621]: 2025-11-05 16:03:12.086 [INFO][4027] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" host="localhost" Nov 5 16:03:12.128754 containerd[1621]: 2025-11-05 16:03:12.086 [INFO][4027] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" host="localhost" Nov 5 16:03:12.128754 containerd[1621]: 2025-11-05 16:03:12.086 [INFO][4027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:12.128754 containerd[1621]: 2025-11-05 16:03:12.086 [INFO][4027] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" HandleID="k8s-pod-network.2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Workload="localhost-k8s-csi--node--driver--m7g79-eth0" Nov 5 16:03:12.129034 containerd[1621]: 2025-11-05 16:03:12.098 [INFO][3995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Namespace="calico-system" Pod="csi-node-driver-m7g79" WorkloadEndpoint="localhost-k8s-csi--node--driver--m7g79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m7g79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09c014d2-b99d-493b-9e36-c9afae0fa214", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-m7g79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibeed7f1d10c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:12.129116 containerd[1621]: 2025-11-05 16:03:12.098 [INFO][3995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Namespace="calico-system" Pod="csi-node-driver-m7g79" WorkloadEndpoint="localhost-k8s-csi--node--driver--m7g79-eth0" Nov 5 16:03:12.129116 containerd[1621]: 2025-11-05 16:03:12.098 [INFO][3995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeed7f1d10c ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Namespace="calico-system" Pod="csi-node-driver-m7g79" WorkloadEndpoint="localhost-k8s-csi--node--driver--m7g79-eth0" Nov 5 16:03:12.129116 containerd[1621]: 2025-11-05 16:03:12.108 [INFO][3995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Namespace="calico-system" Pod="csi-node-driver-m7g79" WorkloadEndpoint="localhost-k8s-csi--node--driver--m7g79-eth0" Nov 5 16:03:12.129339 containerd[1621]: 2025-11-05 16:03:12.109 [INFO][3995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" 
Namespace="calico-system" Pod="csi-node-driver-m7g79" WorkloadEndpoint="localhost-k8s-csi--node--driver--m7g79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m7g79-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09c014d2-b99d-493b-9e36-c9afae0fa214", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce", Pod:"csi-node-driver-m7g79", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibeed7f1d10c", MAC:"ce:0d:0e:9a:96:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:12.129413 containerd[1621]: 2025-11-05 16:03:12.125 [INFO][3995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" Namespace="calico-system" Pod="csi-node-driver-m7g79" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--m7g79-eth0" Nov 5 16:03:12.171236 systemd-networkd[1510]: cali82656bd483a: Link UP Nov 5 16:03:12.171633 systemd-networkd[1510]: cali82656bd483a: Gained carrier Nov 5 16:03:12.191961 containerd[1621]: 2025-11-05 16:03:11.905 [INFO][3998] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:03:12.191961 containerd[1621]: 2025-11-05 16:03:11.930 [INFO][3998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0 calico-apiserver-5f5b56f6b8- calico-apiserver 7606f5a8-f065-4081-b3bc-2344e91053d9 841 0 2025-11-05 16:02:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f5b56f6b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f5b56f6b8-98rmx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82656bd483a [] [] }} ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-98rmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-" Nov 5 16:03:12.191961 containerd[1621]: 2025-11-05 16:03:11.930 [INFO][3998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-98rmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" Nov 5 16:03:12.191961 containerd[1621]: 2025-11-05 16:03:12.021 [INFO][4025] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" 
HandleID="k8s-pod-network.352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Workload="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.022 [INFO][4025] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" HandleID="k8s-pod-network.352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Workload="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000588b60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f5b56f6b8-98rmx", "timestamp":"2025-11-05 16:03:12.021593707 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.022 [INFO][4025] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.087 [INFO][4025] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.087 [INFO][4025] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.135 [INFO][4025] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" host="localhost" Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.142 [INFO][4025] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.149 [INFO][4025] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.152 [INFO][4025] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.154 [INFO][4025] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:12.192234 containerd[1621]: 2025-11-05 16:03:12.154 [INFO][4025] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" host="localhost" Nov 5 16:03:12.192439 containerd[1621]: 2025-11-05 16:03:12.156 [INFO][4025] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d Nov 5 16:03:12.192439 containerd[1621]: 2025-11-05 16:03:12.159 [INFO][4025] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" host="localhost" Nov 5 16:03:12.192439 containerd[1621]: 2025-11-05 16:03:12.165 [INFO][4025] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" host="localhost" Nov 5 16:03:12.192439 containerd[1621]: 2025-11-05 16:03:12.165 [INFO][4025] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" host="localhost" Nov 5 16:03:12.192439 containerd[1621]: 2025-11-05 16:03:12.165 [INFO][4025] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:12.192439 containerd[1621]: 2025-11-05 16:03:12.165 [INFO][4025] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" HandleID="k8s-pod-network.352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Workload="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" Nov 5 16:03:12.192563 containerd[1621]: 2025-11-05 16:03:12.169 [INFO][3998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-98rmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0", GenerateName:"calico-apiserver-5f5b56f6b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7606f5a8-f065-4081-b3bc-2344e91053d9", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5b56f6b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f5b56f6b8-98rmx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82656bd483a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:12.192634 containerd[1621]: 2025-11-05 16:03:12.169 [INFO][3998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-98rmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" Nov 5 16:03:12.192634 containerd[1621]: 2025-11-05 16:03:12.169 [INFO][3998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82656bd483a ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-98rmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" Nov 5 16:03:12.192634 containerd[1621]: 2025-11-05 16:03:12.174 [INFO][3998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-98rmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" Nov 5 16:03:12.192695 containerd[1621]: 2025-11-05 16:03:12.174 [INFO][3998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-98rmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0", GenerateName:"calico-apiserver-5f5b56f6b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7606f5a8-f065-4081-b3bc-2344e91053d9", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5b56f6b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d", Pod:"calico-apiserver-5f5b56f6b8-98rmx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82656bd483a", MAC:"9a:d0:09:66:0e:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:12.192747 containerd[1621]: 2025-11-05 16:03:12.185 [INFO][3998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-98rmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--98rmx-eth0" Nov 5 16:03:12.197278 kubelet[2841]: I1105 16:03:12.197198 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb68s\" (UniqueName: \"kubernetes.io/projected/b92d66c6-1332-4195-9d17-9fdc4c32310f-kube-api-access-bb68s\") pod \"whisker-6745b64cf-lv9qv\" (UID: \"b92d66c6-1332-4195-9d17-9fdc4c32310f\") " pod="calico-system/whisker-6745b64cf-lv9qv" Nov 5 16:03:12.197406 kubelet[2841]: I1105 16:03:12.197374 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b92d66c6-1332-4195-9d17-9fdc4c32310f-whisker-backend-key-pair\") pod \"whisker-6745b64cf-lv9qv\" (UID: \"b92d66c6-1332-4195-9d17-9fdc4c32310f\") " pod="calico-system/whisker-6745b64cf-lv9qv" Nov 5 16:03:12.197498 kubelet[2841]: I1105 16:03:12.197467 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b92d66c6-1332-4195-9d17-9fdc4c32310f-whisker-ca-bundle\") pod \"whisker-6745b64cf-lv9qv\" (UID: \"b92d66c6-1332-4195-9d17-9fdc4c32310f\") " pod="calico-system/whisker-6745b64cf-lv9qv" Nov 5 16:03:12.249587 containerd[1621]: time="2025-11-05T16:03:12.249318492Z" level=info msg="connecting to shim 2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce" address="unix:///run/containerd/s/543b36201921aa8e719e3543d613b0a602e439848309b79623efcfe5af45c6ab" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:12.249710 containerd[1621]: time="2025-11-05T16:03:12.249654621Z" level=info msg="connecting to shim 352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d" 
address="unix:///run/containerd/s/0fde16f63948a98254c97b64d8a40b3663ad98dca97568a3838b1a48920a955c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:12.282110 systemd[1]: Started cri-containerd-2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce.scope - libcontainer container 2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce. Nov 5 16:03:12.286985 systemd[1]: Started cri-containerd-352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d.scope - libcontainer container 352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d. Nov 5 16:03:12.295996 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:03:12.305316 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:03:12.326196 containerd[1621]: time="2025-11-05T16:03:12.326130367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7g79,Uid:09c014d2-b99d-493b-9e36-c9afae0fa214,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a4337f168827317106b5c5eaecb0441222d9d20e53b84ac27af1bf9cbd675ce\"" Nov 5 16:03:12.331446 containerd[1621]: time="2025-11-05T16:03:12.331396735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:03:12.346332 containerd[1621]: time="2025-11-05T16:03:12.346275357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5b56f6b8-98rmx,Uid:7606f5a8-f065-4081-b3bc-2344e91053d9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"352920e939abb37a47601fa329053b1276b2468b52644a3c3307502b879ea26d\"" Nov 5 16:03:12.400474 containerd[1621]: time="2025-11-05T16:03:12.400314389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6745b64cf-lv9qv,Uid:b92d66c6-1332-4195-9d17-9fdc4c32310f,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:12.457749 systemd[1]: 
var-lib-kubelet-pods-bdb8325a\x2d0fd4\x2d4d9c\x2d86b5\x2d8a27d438208e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxb5lh.mount: Deactivated successfully. Nov 5 16:03:12.457877 systemd[1]: var-lib-kubelet-pods-bdb8325a\x2d0fd4\x2d4d9c\x2d86b5\x2d8a27d438208e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 16:03:12.519658 systemd-networkd[1510]: cali3c1a5096232: Link UP Nov 5 16:03:12.519960 systemd-networkd[1510]: cali3c1a5096232: Gained carrier Nov 5 16:03:12.536232 containerd[1621]: 2025-11-05 16:03:12.429 [INFO][4165] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:03:12.536232 containerd[1621]: 2025-11-05 16:03:12.440 [INFO][4165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6745b64cf--lv9qv-eth0 whisker-6745b64cf- calico-system b92d66c6-1332-4195-9d17-9fdc4c32310f 917 0 2025-11-05 16:03:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6745b64cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6745b64cf-lv9qv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3c1a5096232 [] [] }} ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Namespace="calico-system" Pod="whisker-6745b64cf-lv9qv" WorkloadEndpoint="localhost-k8s-whisker--6745b64cf--lv9qv-" Nov 5 16:03:12.536232 containerd[1621]: 2025-11-05 16:03:12.441 [INFO][4165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Namespace="calico-system" Pod="whisker-6745b64cf-lv9qv" WorkloadEndpoint="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" Nov 5 16:03:12.536232 containerd[1621]: 2025-11-05 16:03:12.474 [INFO][4179] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" HandleID="k8s-pod-network.101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Workload="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.475 [INFO][4179] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" HandleID="k8s-pod-network.101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Workload="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de920), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6745b64cf-lv9qv", "timestamp":"2025-11-05 16:03:12.474990988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.475 [INFO][4179] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.475 [INFO][4179] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.475 [INFO][4179] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.485 [INFO][4179] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" host="localhost" Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.489 [INFO][4179] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.495 [INFO][4179] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.497 [INFO][4179] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.500 [INFO][4179] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:12.537009 containerd[1621]: 2025-11-05 16:03:12.500 [INFO][4179] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" host="localhost" Nov 5 16:03:12.537317 containerd[1621]: 2025-11-05 16:03:12.501 [INFO][4179] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a Nov 5 16:03:12.537317 containerd[1621]: 2025-11-05 16:03:12.506 [INFO][4179] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" host="localhost" Nov 5 16:03:12.537317 containerd[1621]: 2025-11-05 16:03:12.513 [INFO][4179] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" host="localhost" Nov 5 16:03:12.537317 containerd[1621]: 2025-11-05 16:03:12.513 [INFO][4179] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" host="localhost" Nov 5 16:03:12.537317 containerd[1621]: 2025-11-05 16:03:12.513 [INFO][4179] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:12.537317 containerd[1621]: 2025-11-05 16:03:12.513 [INFO][4179] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" HandleID="k8s-pod-network.101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Workload="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" Nov 5 16:03:12.537516 containerd[1621]: 2025-11-05 16:03:12.517 [INFO][4165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Namespace="calico-system" Pod="whisker-6745b64cf-lv9qv" WorkloadEndpoint="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6745b64cf--lv9qv-eth0", GenerateName:"whisker-6745b64cf-", Namespace:"calico-system", SelfLink:"", UID:"b92d66c6-1332-4195-9d17-9fdc4c32310f", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6745b64cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6745b64cf-lv9qv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3c1a5096232", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:12.537516 containerd[1621]: 2025-11-05 16:03:12.517 [INFO][4165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Namespace="calico-system" Pod="whisker-6745b64cf-lv9qv" WorkloadEndpoint="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" Nov 5 16:03:12.537615 containerd[1621]: 2025-11-05 16:03:12.517 [INFO][4165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c1a5096232 ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Namespace="calico-system" Pod="whisker-6745b64cf-lv9qv" WorkloadEndpoint="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" Nov 5 16:03:12.537615 containerd[1621]: 2025-11-05 16:03:12.520 [INFO][4165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Namespace="calico-system" Pod="whisker-6745b64cf-lv9qv" WorkloadEndpoint="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" Nov 5 16:03:12.537669 containerd[1621]: 2025-11-05 16:03:12.520 [INFO][4165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Namespace="calico-system" Pod="whisker-6745b64cf-lv9qv" 
WorkloadEndpoint="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6745b64cf--lv9qv-eth0", GenerateName:"whisker-6745b64cf-", Namespace:"calico-system", SelfLink:"", UID:"b92d66c6-1332-4195-9d17-9fdc4c32310f", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6745b64cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a", Pod:"whisker-6745b64cf-lv9qv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3c1a5096232", MAC:"4a:37:09:73:59:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:12.537730 containerd[1621]: 2025-11-05 16:03:12.532 [INFO][4165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" Namespace="calico-system" Pod="whisker-6745b64cf-lv9qv" WorkloadEndpoint="localhost-k8s-whisker--6745b64cf--lv9qv-eth0" Nov 5 16:03:12.633555 containerd[1621]: time="2025-11-05T16:03:12.633508118Z" level=info msg="connecting to shim 
101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a" address="unix:///run/containerd/s/a46e63cb23ceaa35aaccd80d9d4d60ea1192f8b43be1857532b71d88a9452e5e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:12.664128 systemd[1]: Started cri-containerd-101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a.scope - libcontainer container 101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a. Nov 5 16:03:12.677251 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:03:12.712250 containerd[1621]: time="2025-11-05T16:03:12.712205755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6745b64cf-lv9qv,Uid:b92d66c6-1332-4195-9d17-9fdc4c32310f,Namespace:calico-system,Attempt:0,} returns sandbox id \"101b2591b698d260744aa132d1cb37489363fe3a3dc2ee7f71e7f61f75b99f6a\"" Nov 5 16:03:12.713945 containerd[1621]: time="2025-11-05T16:03:12.713882908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:12.715438 containerd[1621]: time="2025-11-05T16:03:12.715394076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:03:12.720950 containerd[1621]: time="2025-11-05T16:03:12.720897916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:03:12.721121 kubelet[2841]: E1105 16:03:12.721080 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 
16:03:12.721753 kubelet[2841]: E1105 16:03:12.721704 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:03:12.722090 containerd[1621]: time="2025-11-05T16:03:12.722060640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:12.731245 kubelet[2841]: E1105 16:03:12.731124 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8zcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged
:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m7g79_calico-system(09c014d2-b99d-493b-9e36-c9afae0fa214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:12.856293 kubelet[2841]: E1105 16:03:12.856231 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:12.856793 containerd[1621]: time="2025-11-05T16:03:12.856721472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s8jjg,Uid:813fc040-02be-4987-a338-2511b9fa3fec,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:12.985567 systemd-networkd[1510]: calie61ab6f06e1: Link UP Nov 5 16:03:12.988067 systemd-networkd[1510]: calie61ab6f06e1: Gained carrier Nov 5 16:03:13.012920 containerd[1621]: 2025-11-05 16:03:12.880 [INFO][4242] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:03:13.012920 containerd[1621]: 2025-11-05 16:03:12.892 [INFO][4242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0 coredns-668d6bf9bc- kube-system 813fc040-02be-4987-a338-2511b9fa3fec 834 0 2025-11-05 16:02:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-s8jjg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie61ab6f06e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8jjg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8jjg-" Nov 5 16:03:13.012920 containerd[1621]: 2025-11-05 16:03:12.892 [INFO][4242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8jjg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" Nov 5 16:03:13.012920 containerd[1621]: 2025-11-05 16:03:12.929 [INFO][4251] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" HandleID="k8s-pod-network.724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Workload="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.929 [INFO][4251] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" HandleID="k8s-pod-network.724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Workload="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab050), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-s8jjg", "timestamp":"2025-11-05 16:03:12.929130254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.929 [INFO][4251] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.929 [INFO][4251] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.929 [INFO][4251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.936 [INFO][4251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" host="localhost" Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.941 [INFO][4251] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.945 [INFO][4251] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.947 [INFO][4251] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.951 [INFO][4251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:13.013219 containerd[1621]: 2025-11-05 16:03:12.951 [INFO][4251] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" host="localhost" Nov 5 16:03:13.016116 containerd[1621]: 2025-11-05 16:03:12.952 [INFO][4251] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a Nov 5 16:03:13.016116 containerd[1621]: 2025-11-05 16:03:12.958 [INFO][4251] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" host="localhost" Nov 5 16:03:13.016116 containerd[1621]: 2025-11-05 16:03:12.968 [INFO][4251] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" host="localhost" Nov 5 16:03:13.016116 containerd[1621]: 2025-11-05 16:03:12.968 [INFO][4251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" host="localhost" Nov 5 16:03:13.016116 containerd[1621]: 2025-11-05 16:03:12.968 [INFO][4251] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:13.016116 containerd[1621]: 2025-11-05 16:03:12.968 [INFO][4251] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" HandleID="k8s-pod-network.724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Workload="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" Nov 5 16:03:13.016244 containerd[1621]: 2025-11-05 16:03:12.978 [INFO][4242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8jjg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"813fc040-02be-4987-a338-2511b9fa3fec", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-s8jjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie61ab6f06e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:13.016306 containerd[1621]: 2025-11-05 16:03:12.978 [INFO][4242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8jjg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" Nov 5 16:03:13.016306 containerd[1621]: 2025-11-05 16:03:12.978 [INFO][4242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie61ab6f06e1 ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8jjg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" Nov 5 
16:03:13.016306 containerd[1621]: 2025-11-05 16:03:12.988 [INFO][4242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8jjg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" Nov 5 16:03:13.016375 containerd[1621]: 2025-11-05 16:03:12.989 [INFO][4242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8jjg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"813fc040-02be-4987-a338-2511b9fa3fec", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a", Pod:"coredns-668d6bf9bc-s8jjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie61ab6f06e1", 
MAC:"c6:8c:26:07:59:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:13.016375 containerd[1621]: 2025-11-05 16:03:13.005 [INFO][4242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8jjg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8jjg-eth0" Nov 5 16:03:13.075786 containerd[1621]: time="2025-11-05T16:03:13.075726021Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:13.191075 containerd[1621]: time="2025-11-05T16:03:13.190995492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:13.191239 containerd[1621]: time="2025-11-05T16:03:13.191089590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:13.191427 kubelet[2841]: E1105 16:03:13.191370 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:13.191852 kubelet[2841]: E1105 16:03:13.191436 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:13.192307 containerd[1621]: time="2025-11-05T16:03:13.192267813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:03:13.195412 kubelet[2841]: E1105 16:03:13.193896 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg9x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f5b56f6b8-98rmx_calico-apiserver(7606f5a8-f065-4081-b3bc-2344e91053d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:13.195412 kubelet[2841]: E1105 16:03:13.195110 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" podUID="7606f5a8-f065-4081-b3bc-2344e91053d9" Nov 5 16:03:13.233981 containerd[1621]: time="2025-11-05T16:03:13.233929778Z" level=info msg="connecting to shim 724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a" 
address="unix:///run/containerd/s/79f138810f4b49843e69db4df51b6ff7f3f45544bc3e5c50727579b0d2226d6d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:13.275917 systemd[1]: Started cri-containerd-724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a.scope - libcontainer container 724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a. Nov 5 16:03:13.297475 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:03:13.456915 containerd[1621]: time="2025-11-05T16:03:13.456839385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s8jjg,Uid:813fc040-02be-4987-a338-2511b9fa3fec,Namespace:kube-system,Attempt:0,} returns sandbox id \"724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a\"" Nov 5 16:03:13.458967 kubelet[2841]: E1105 16:03:13.458926 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:13.465496 containerd[1621]: time="2025-11-05T16:03:13.465442381Z" level=info msg="CreateContainer within sandbox \"724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 16:03:13.484968 containerd[1621]: time="2025-11-05T16:03:13.484908274Z" level=info msg="Container da59acc18abb69b6816e558de1d90f08926ec66ef3cf83bc8e942dfac9dd4ba2: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:13.487478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2315274472.mount: Deactivated successfully. 
Nov 5 16:03:13.494529 containerd[1621]: time="2025-11-05T16:03:13.494472029Z" level=info msg="CreateContainer within sandbox \"724bca5fb41f52ddecf809348ecb6eaf8f5eebb98893f0c4c81e13472c40cd7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da59acc18abb69b6816e558de1d90f08926ec66ef3cf83bc8e942dfac9dd4ba2\"" Nov 5 16:03:13.495791 containerd[1621]: time="2025-11-05T16:03:13.495580229Z" level=info msg="StartContainer for \"da59acc18abb69b6816e558de1d90f08926ec66ef3cf83bc8e942dfac9dd4ba2\"" Nov 5 16:03:13.496643 containerd[1621]: time="2025-11-05T16:03:13.496620750Z" level=info msg="connecting to shim da59acc18abb69b6816e558de1d90f08926ec66ef3cf83bc8e942dfac9dd4ba2" address="unix:///run/containerd/s/79f138810f4b49843e69db4df51b6ff7f3f45544bc3e5c50727579b0d2226d6d" protocol=ttrpc version=3 Nov 5 16:03:13.516813 systemd-networkd[1510]: cali82656bd483a: Gained IPv6LL Nov 5 16:03:13.522081 containerd[1621]: time="2025-11-05T16:03:13.521644202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:13.523015 containerd[1621]: time="2025-11-05T16:03:13.522973203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:03:13.523131 containerd[1621]: time="2025-11-05T16:03:13.523048205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:03:13.523419 kubelet[2841]: E1105 16:03:13.523225 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:13.523419 kubelet[2841]: E1105 16:03:13.523295 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:13.523648 containerd[1621]: time="2025-11-05T16:03:13.523618922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:03:13.524138 kubelet[2841]: E1105 16:03:13.524044 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dd37dbad30e5453fb8d468d048f6be85,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bb68s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,Std
inOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6745b64cf-lv9qv_calico-system(b92d66c6-1332-4195-9d17-9fdc4c32310f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:13.525054 systemd[1]: Started cri-containerd-da59acc18abb69b6816e558de1d90f08926ec66ef3cf83bc8e942dfac9dd4ba2.scope - libcontainer container da59acc18abb69b6816e558de1d90f08926ec66ef3cf83bc8e942dfac9dd4ba2. Nov 5 16:03:13.571875 containerd[1621]: time="2025-11-05T16:03:13.571428910Z" level=info msg="StartContainer for \"da59acc18abb69b6816e558de1d90f08926ec66ef3cf83bc8e942dfac9dd4ba2\" returns successfully" Nov 5 16:03:13.791185 systemd-networkd[1510]: vxlan.calico: Link UP Nov 5 16:03:13.791199 systemd-networkd[1510]: vxlan.calico: Gained carrier Nov 5 16:03:13.856503 containerd[1621]: time="2025-11-05T16:03:13.856345709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6894bd5d4c-lzwwl,Uid:ee8d6811-815a-4547-b978-c3b4809dcbf0,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:13.857988 containerd[1621]: time="2025-11-05T16:03:13.857665592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5tq9,Uid:49cc43d2-d1dd-4d90-a5d3-9c3601306d8f,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:13.866816 kubelet[2841]: I1105 16:03:13.866751 2841 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdb8325a-0fd4-4d9c-86b5-8a27d438208e" path="/var/lib/kubelet/pods/bdb8325a-0fd4-4d9c-86b5-8a27d438208e/volumes" Nov 5 16:03:13.899983 systemd-networkd[1510]: calibeed7f1d10c: Gained IPv6LL Nov 5 16:03:13.961013 containerd[1621]: 
time="2025-11-05T16:03:13.960949906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:13.978899 containerd[1621]: time="2025-11-05T16:03:13.978817448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:03:13.980760 containerd[1621]: time="2025-11-05T16:03:13.980669694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:03:13.981446 kubelet[2841]: E1105 16:03:13.981368 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:13.981446 kubelet[2841]: E1105 16:03:13.981422 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:13.981832 kubelet[2841]: E1105 16:03:13.981699 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8zcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m7g79_calico-system(09c014d2-b99d-493b-9e36-c9afae0fa214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:13.982019 containerd[1621]: time="2025-11-05T16:03:13.981976592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:03:13.984102 kubelet[2841]: E1105 16:03:13.984003 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:03:14.023501 kubelet[2841]: E1105 16:03:14.023459 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:14.024731 kubelet[2841]: E1105 16:03:14.024653 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" podUID="7606f5a8-f065-4081-b3bc-2344e91053d9" Nov 5 16:03:14.025331 kubelet[2841]: E1105 16:03:14.025228 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:03:14.043418 systemd-networkd[1510]: califf3810e3613: Link UP Nov 5 16:03:14.045009 systemd-networkd[1510]: califf3810e3613: Gained carrier Nov 5 16:03:14.065383 kubelet[2841]: I1105 16:03:14.065267 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s8jjg" podStartSLOduration=42.065245846 podStartE2EDuration="42.065245846s" podCreationTimestamp="2025-11-05 16:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:03:14.064036725 +0000 UTC m=+46.368172159" watchObservedRunningTime="2025-11-05 16:03:14.065245846 +0000 UTC m=+46.369381280" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:13.939 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--r5tq9-eth0 goldmane-666569f655- calico-system 49cc43d2-d1dd-4d90-a5d3-9c3601306d8f 842 0 2025-11-05 16:02:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-r5tq9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califf3810e3613 [] [] }} ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Namespace="calico-system" Pod="goldmane-666569f655-r5tq9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5tq9-" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:13.939 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Namespace="calico-system" Pod="goldmane-666569f655-r5tq9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5tq9-eth0" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:13.986 [INFO][4542] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" HandleID="k8s-pod-network.310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Workload="localhost-k8s-goldmane--666569f655--r5tq9-eth0" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:13.986 [INFO][4542] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" HandleID="k8s-pod-network.310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Workload="localhost-k8s-goldmane--666569f655--r5tq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"goldmane-666569f655-r5tq9", "timestamp":"2025-11-05 16:03:13.986371037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:13.986 [INFO][4542] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:13.986 [INFO][4542] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:13.986 [INFO][4542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:13.995 [INFO][4542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" host="localhost" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.004 [INFO][4542] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.009 [INFO][4542] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.012 [INFO][4542] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.015 [INFO][4542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.015 [INFO][4542] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" host="localhost" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.017 [INFO][4542] ipam/ipam.go 1780: Creating 
new handle: k8s-pod-network.310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.021 [INFO][4542] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" host="localhost" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.033 [INFO][4542] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" host="localhost" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.033 [INFO][4542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" host="localhost" Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.033 [INFO][4542] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:03:14.070681 containerd[1621]: 2025-11-05 16:03:14.033 [INFO][4542] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" HandleID="k8s-pod-network.310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Workload="localhost-k8s-goldmane--666569f655--r5tq9-eth0" Nov 5 16:03:14.071344 containerd[1621]: 2025-11-05 16:03:14.037 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Namespace="calico-system" Pod="goldmane-666569f655-r5tq9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5tq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r5tq9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"49cc43d2-d1dd-4d90-a5d3-9c3601306d8f", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-r5tq9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califf3810e3613", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:14.071344 containerd[1621]: 2025-11-05 16:03:14.038 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Namespace="calico-system" Pod="goldmane-666569f655-r5tq9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5tq9-eth0" Nov 5 16:03:14.071344 containerd[1621]: 2025-11-05 16:03:14.038 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf3810e3613 ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Namespace="calico-system" Pod="goldmane-666569f655-r5tq9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5tq9-eth0" Nov 5 16:03:14.071344 containerd[1621]: 2025-11-05 16:03:14.045 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Namespace="calico-system" Pod="goldmane-666569f655-r5tq9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5tq9-eth0" Nov 5 16:03:14.071344 containerd[1621]: 2025-11-05 16:03:14.049 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Namespace="calico-system" Pod="goldmane-666569f655-r5tq9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5tq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r5tq9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"49cc43d2-d1dd-4d90-a5d3-9c3601306d8f", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 46, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f", Pod:"goldmane-666569f655-r5tq9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califf3810e3613", MAC:"e2:a5:4e:f2:23:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:14.071344 containerd[1621]: 2025-11-05 16:03:14.064 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" Namespace="calico-system" Pod="goldmane-666569f655-r5tq9" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5tq9-eth0" Nov 5 16:03:14.092901 systemd-networkd[1510]: cali3c1a5096232: Gained IPv6LL Nov 5 16:03:14.113924 containerd[1621]: time="2025-11-05T16:03:14.113808915Z" level=info msg="connecting to shim 310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f" address="unix:///run/containerd/s/5be270fa2671b42ccb49e59d10fed5f130abf4f16601555f92d4abbf919b7d93" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:14.144907 systemd[1]: Started cri-containerd-310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f.scope - libcontainer container 310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f. 
Nov 5 16:03:14.156903 systemd-networkd[1510]: calie61ab6f06e1: Gained IPv6LL Nov 5 16:03:14.158634 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:03:14.263699 containerd[1621]: time="2025-11-05T16:03:14.263632513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5tq9,Uid:49cc43d2-d1dd-4d90-a5d3-9c3601306d8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"310db32dd1b87f014b79f4dfd447b10765554b0f49db97e25986fb24edc9f46f\"" Nov 5 16:03:14.507157 containerd[1621]: time="2025-11-05T16:03:14.507112107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:14.557416 containerd[1621]: time="2025-11-05T16:03:14.557348101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:03:14.557416 containerd[1621]: time="2025-11-05T16:03:14.557413065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:03:14.557691 kubelet[2841]: E1105 16:03:14.557647 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:03:14.558097 kubelet[2841]: E1105 16:03:14.557700 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:03:14.558141 containerd[1621]: time="2025-11-05T16:03:14.558027183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:03:14.558235 kubelet[2841]: E1105 16:03:14.558180 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb68s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:
RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6745b64cf-lv9qv_calico-system(b92d66c6-1332-4195-9d17-9fdc4c32310f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:14.559451 kubelet[2841]: E1105 16:03:14.559403 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6745b64cf-lv9qv" podUID="b92d66c6-1332-4195-9d17-9fdc4c32310f" Nov 5 16:03:14.577857 systemd-networkd[1510]: cali194d1c3d7ac: Link UP Nov 5 16:03:14.578117 systemd-networkd[1510]: cali194d1c3d7ac: Gained carrier Nov 5 16:03:14.855829 kubelet[2841]: E1105 16:03:14.855644 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:14.856087 containerd[1621]: time="2025-11-05T16:03:14.856054494Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-svhlp,Uid:2f03bfaf-897c-4a77-856b-68e383753ef9,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:14.856396 containerd[1621]: time="2025-11-05T16:03:14.856236310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5b56f6b8-w6xvl,Uid:e2f5a50e-d006-4d79-9642-9b2ea8b5bc20,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:13.938 [INFO][4513] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0 calico-kube-controllers-6894bd5d4c- calico-system ee8d6811-815a-4547-b978-c3b4809dcbf0 840 0 2025-11-05 16:02:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6894bd5d4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6894bd5d4c-lzwwl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali194d1c3d7ac [] [] }} ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Namespace="calico-system" Pod="calico-kube-controllers-6894bd5d4c-lzwwl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:13.938 [INFO][4513] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Namespace="calico-system" Pod="calico-kube-controllers-6894bd5d4c-lzwwl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:13.994 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" HandleID="k8s-pod-network.4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Workload="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:13.994 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" HandleID="k8s-pod-network.4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Workload="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6894bd5d4c-lzwwl", "timestamp":"2025-11-05 16:03:13.994149373 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:13.994 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.033 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.034 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.094 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" host="localhost" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.109 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.119 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.121 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.124 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.125 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" host="localhost" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.126 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727 Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.373 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" host="localhost" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.571 [INFO][4544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" host="localhost" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.571 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" host="localhost" Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.571 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:14.987714 containerd[1621]: 2025-11-05 16:03:14.571 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" HandleID="k8s-pod-network.4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Workload="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" Nov 5 16:03:14.988676 containerd[1621]: 2025-11-05 16:03:14.575 [INFO][4513] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Namespace="calico-system" Pod="calico-kube-controllers-6894bd5d4c-lzwwl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0", GenerateName:"calico-kube-controllers-6894bd5d4c-", Namespace:"calico-system", SelfLink:"", UID:"ee8d6811-815a-4547-b978-c3b4809dcbf0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6894bd5d4c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6894bd5d4c-lzwwl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali194d1c3d7ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:14.988676 containerd[1621]: 2025-11-05 16:03:14.575 [INFO][4513] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Namespace="calico-system" Pod="calico-kube-controllers-6894bd5d4c-lzwwl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" Nov 5 16:03:14.988676 containerd[1621]: 2025-11-05 16:03:14.575 [INFO][4513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali194d1c3d7ac ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Namespace="calico-system" Pod="calico-kube-controllers-6894bd5d4c-lzwwl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" Nov 5 16:03:14.988676 containerd[1621]: 2025-11-05 16:03:14.577 [INFO][4513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Namespace="calico-system" Pod="calico-kube-controllers-6894bd5d4c-lzwwl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" Nov 5 16:03:14.988676 containerd[1621]: 2025-11-05 
16:03:14.578 [INFO][4513] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Namespace="calico-system" Pod="calico-kube-controllers-6894bd5d4c-lzwwl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0", GenerateName:"calico-kube-controllers-6894bd5d4c-", Namespace:"calico-system", SelfLink:"", UID:"ee8d6811-815a-4547-b978-c3b4809dcbf0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6894bd5d4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727", Pod:"calico-kube-controllers-6894bd5d4c-lzwwl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali194d1c3d7ac", MAC:"6e:b1:89:90:5b:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:14.988676 containerd[1621]: 2025-11-05 
16:03:14.980 [INFO][4513] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" Namespace="calico-system" Pod="calico-kube-controllers-6894bd5d4c-lzwwl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6894bd5d4c--lzwwl-eth0" Nov 5 16:03:15.027140 kubelet[2841]: E1105 16:03:15.027056 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:15.031794 kubelet[2841]: E1105 16:03:15.030965 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6745b64cf-lv9qv" podUID="b92d66c6-1332-4195-9d17-9fdc4c32310f" Nov 5 16:03:15.046678 containerd[1621]: time="2025-11-05T16:03:15.045220197Z" level=info msg="connecting to shim 4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727" address="unix:///run/containerd/s/e2248b676a2ddb21b16cd81d8435d98599b5dbc8b17ed2af32e323a80fab121f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:15.098081 systemd[1]: Started 
cri-containerd-4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727.scope - libcontainer container 4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727. Nov 5 16:03:15.129682 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:03:15.134751 containerd[1621]: time="2025-11-05T16:03:15.134631290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:15.137443 containerd[1621]: time="2025-11-05T16:03:15.137390780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:03:15.137539 containerd[1621]: time="2025-11-05T16:03:15.137503254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:15.138265 kubelet[2841]: E1105 16:03:15.138184 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:03:15.138388 kubelet[2841]: E1105 16:03:15.138276 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:03:15.138652 kubelet[2841]: E1105 16:03:15.138585 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ng6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5tq9_calico-system(49cc43d2-d1dd-4d90-a5d3-9c3601306d8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:15.139970 kubelet[2841]: E1105 16:03:15.139935 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5tq9" podUID="49cc43d2-d1dd-4d90-a5d3-9c3601306d8f" Nov 5 16:03:15.183139 systemd-networkd[1510]: calic5587a74ecd: Link UP Nov 5 16:03:15.184239 systemd-networkd[1510]: calic5587a74ecd: Gained carrier Nov 5 16:03:15.250708 containerd[1621]: time="2025-11-05T16:03:15.250640400Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6894bd5d4c-lzwwl,Uid:ee8d6811-815a-4547-b978-c3b4809dcbf0,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cc20a927194c1509c809820ac3378dc681209c05253903b7368d9526bdf6727\"" Nov 5 16:03:15.252120 containerd[1621]: time="2025-11-05T16:03:15.252088314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.057 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0 calico-apiserver-5f5b56f6b8- calico-apiserver e2f5a50e-d006-4d79-9642-9b2ea8b5bc20 837 0 2025-11-05 16:02:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f5b56f6b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f5b56f6b8-w6xvl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic5587a74ecd [] [] }} ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-w6xvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.057 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-w6xvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.117 [INFO][4715] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" 
HandleID="k8s-pod-network.6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Workload="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.118 [INFO][4715] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" HandleID="k8s-pod-network.6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Workload="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000408250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f5b56f6b8-w6xvl", "timestamp":"2025-11-05 16:03:15.11789297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.118 [INFO][4715] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.118 [INFO][4715] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.118 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.126 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" host="localhost" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.132 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.140 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.143 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.146 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.146 [INFO][4715] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" host="localhost" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.148 [INFO][4715] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870 Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.153 [INFO][4715] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" host="localhost" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.165 [INFO][4715] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" host="localhost" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.166 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" host="localhost" Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.166 [INFO][4715] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:15.351088 containerd[1621]: 2025-11-05 16:03:15.166 [INFO][4715] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" HandleID="k8s-pod-network.6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Workload="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" Nov 5 16:03:15.351898 containerd[1621]: 2025-11-05 16:03:15.175 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-w6xvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0", GenerateName:"calico-apiserver-5f5b56f6b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2f5a50e-d006-4d79-9642-9b2ea8b5bc20", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5b56f6b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f5b56f6b8-w6xvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5587a74ecd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:15.351898 containerd[1621]: 2025-11-05 16:03:15.175 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-w6xvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" Nov 5 16:03:15.351898 containerd[1621]: 2025-11-05 16:03:15.176 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5587a74ecd ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-w6xvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" Nov 5 16:03:15.351898 containerd[1621]: 2025-11-05 16:03:15.184 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-w6xvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" Nov 5 16:03:15.351898 containerd[1621]: 2025-11-05 16:03:15.187 [INFO][4663] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-w6xvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0", GenerateName:"calico-apiserver-5f5b56f6b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2f5a50e-d006-4d79-9642-9b2ea8b5bc20", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5b56f6b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870", Pod:"calico-apiserver-5f5b56f6b8-w6xvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5587a74ecd", MAC:"be:75:a1:3c:39:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:15.351898 containerd[1621]: 2025-11-05 16:03:15.348 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" Namespace="calico-apiserver" Pod="calico-apiserver-5f5b56f6b8-w6xvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5b56f6b8--w6xvl-eth0" Nov 5 16:03:15.470387 containerd[1621]: time="2025-11-05T16:03:15.469595950Z" level=info msg="connecting to shim 6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870" address="unix:///run/containerd/s/bdf202fc3a9be9946cd85e7a5017a8920a6a79c6316bf2088e89a7962aed9ad0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:15.502392 systemd-networkd[1510]: cali63ed61271f0: Link UP Nov 5 16:03:15.504963 systemd-networkd[1510]: cali63ed61271f0: Gained carrier Nov 5 16:03:15.505994 systemd[1]: Started cri-containerd-6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870.scope - libcontainer container 6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870. Nov 5 16:03:15.543142 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.100 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--svhlp-eth0 coredns-668d6bf9bc- kube-system 2f03bfaf-897c-4a77-856b-68e383753ef9 844 0 2025-11-05 16:02:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-svhlp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63ed61271f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Namespace="kube-system" Pod="coredns-668d6bf9bc-svhlp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--svhlp-" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 
16:03:15.100 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Namespace="kube-system" Pod="coredns-668d6bf9bc-svhlp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.148 [INFO][4736] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" HandleID="k8s-pod-network.6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Workload="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.148 [INFO][4736] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" HandleID="k8s-pod-network.6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Workload="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-svhlp", "timestamp":"2025-11-05 16:03:15.148104009 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.148 [INFO][4736] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.166 [INFO][4736] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.166 [INFO][4736] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.226 [INFO][4736] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" host="localhost" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.449 [INFO][4736] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.454 [INFO][4736] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.456 [INFO][4736] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.459 [INFO][4736] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.459 [INFO][4736] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" host="localhost" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.463 [INFO][4736] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452 Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.471 [INFO][4736] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" host="localhost" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.483 [INFO][4736] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" host="localhost" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.484 [INFO][4736] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" host="localhost" Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.484 [INFO][4736] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:15.543413 containerd[1621]: 2025-11-05 16:03:15.484 [INFO][4736] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" HandleID="k8s-pod-network.6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Workload="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" Nov 5 16:03:15.544288 containerd[1621]: 2025-11-05 16:03:15.490 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Namespace="kube-system" Pod="coredns-668d6bf9bc-svhlp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--svhlp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f03bfaf-897c-4a77-856b-68e383753ef9", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-svhlp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63ed61271f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:15.544288 containerd[1621]: 2025-11-05 16:03:15.491 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Namespace="kube-system" Pod="coredns-668d6bf9bc-svhlp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" Nov 5 16:03:15.544288 containerd[1621]: 2025-11-05 16:03:15.491 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63ed61271f0 ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Namespace="kube-system" Pod="coredns-668d6bf9bc-svhlp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" Nov 5 16:03:15.544288 containerd[1621]: 2025-11-05 16:03:15.511 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Namespace="kube-system" Pod="coredns-668d6bf9bc-svhlp" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" Nov 5 16:03:15.544288 containerd[1621]: 2025-11-05 16:03:15.511 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Namespace="kube-system" Pod="coredns-668d6bf9bc-svhlp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--svhlp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f03bfaf-897c-4a77-856b-68e383753ef9", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452", Pod:"coredns-668d6bf9bc-svhlp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63ed61271f0", MAC:"4a:73:71:2f:a4:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:15.544288 containerd[1621]: 2025-11-05 16:03:15.537 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" Namespace="kube-system" Pod="coredns-668d6bf9bc-svhlp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--svhlp-eth0" Nov 5 16:03:15.577431 containerd[1621]: time="2025-11-05T16:03:15.576887605Z" level=info msg="connecting to shim 6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452" address="unix:///run/containerd/s/ff8a6983fb78aab99f9bf68162a5009f52d4aad28a2b38bb38a2dd214a6c42e6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:15.591015 containerd[1621]: time="2025-11-05T16:03:15.590963452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5b56f6b8-w6xvl,Uid:e2f5a50e-d006-4d79-9642-9b2ea8b5bc20,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6e40e629a239e46c7c452ff32ebdfc0b210c1af4cb42f68885d5316c67172870\"" Nov 5 16:03:15.616948 systemd[1]: Started cri-containerd-6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452.scope - libcontainer container 6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452. 
Nov 5 16:03:15.631120 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 16:03:15.666653 containerd[1621]: time="2025-11-05T16:03:15.666580976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-svhlp,Uid:2f03bfaf-897c-4a77-856b-68e383753ef9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452\"" Nov 5 16:03:15.667642 kubelet[2841]: E1105 16:03:15.667606 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:15.670907 containerd[1621]: time="2025-11-05T16:03:15.670755176Z" level=info msg="CreateContainer within sandbox \"6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 16:03:15.681562 containerd[1621]: time="2025-11-05T16:03:15.681496379Z" level=info msg="Container 1df853d46bf318c4ac215816754e76c297e89bc8e4a375434eee0ff1b5afe3a9: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:15.689494 containerd[1621]: time="2025-11-05T16:03:15.689436814Z" level=info msg="CreateContainer within sandbox \"6b6a660647996d7f2260e4d82ed082dd53f0293b079fb76ea2ebc2c961c62452\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1df853d46bf318c4ac215816754e76c297e89bc8e4a375434eee0ff1b5afe3a9\"" Nov 5 16:03:15.690147 containerd[1621]: time="2025-11-05T16:03:15.690088422Z" level=info msg="StartContainer for \"1df853d46bf318c4ac215816754e76c297e89bc8e4a375434eee0ff1b5afe3a9\"" Nov 5 16:03:15.691123 containerd[1621]: time="2025-11-05T16:03:15.691084988Z" level=info msg="connecting to shim 1df853d46bf318c4ac215816754e76c297e89bc8e4a375434eee0ff1b5afe3a9" address="unix:///run/containerd/s/ff8a6983fb78aab99f9bf68162a5009f52d4aad28a2b38bb38a2dd214a6c42e6" protocol=ttrpc version=3 Nov 5 
16:03:15.691930 systemd-networkd[1510]: cali194d1c3d7ac: Gained IPv6LL Nov 5 16:03:15.728078 systemd[1]: Started cri-containerd-1df853d46bf318c4ac215816754e76c297e89bc8e4a375434eee0ff1b5afe3a9.scope - libcontainer container 1df853d46bf318c4ac215816754e76c297e89bc8e4a375434eee0ff1b5afe3a9. Nov 5 16:03:15.755940 systemd-networkd[1510]: vxlan.calico: Gained IPv6LL Nov 5 16:03:15.757525 systemd-networkd[1510]: califf3810e3613: Gained IPv6LL Nov 5 16:03:15.763814 containerd[1621]: time="2025-11-05T16:03:15.763754986Z" level=info msg="StartContainer for \"1df853d46bf318c4ac215816754e76c297e89bc8e4a375434eee0ff1b5afe3a9\" returns successfully" Nov 5 16:03:16.031146 kubelet[2841]: E1105 16:03:16.030021 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:16.032758 kubelet[2841]: E1105 16:03:16.032708 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:16.034131 kubelet[2841]: E1105 16:03:16.034079 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5tq9" podUID="49cc43d2-d1dd-4d90-a5d3-9c3601306d8f" Nov 5 16:03:16.046795 kubelet[2841]: I1105 16:03:16.044916 2841 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-svhlp" podStartSLOduration=44.044899278 podStartE2EDuration="44.044899278s" 
podCreationTimestamp="2025-11-05 16:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:03:16.044073949 +0000 UTC m=+48.348209373" watchObservedRunningTime="2025-11-05 16:03:16.044899278 +0000 UTC m=+48.349034742" Nov 5 16:03:16.381630 containerd[1621]: time="2025-11-05T16:03:16.381464056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:16.471143 containerd[1621]: time="2025-11-05T16:03:16.471077369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:03:16.471309 containerd[1621]: time="2025-11-05T16:03:16.471110592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:03:16.471547 kubelet[2841]: E1105 16:03:16.471476 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:16.471658 kubelet[2841]: E1105 16:03:16.471555 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:16.472038 kubelet[2841]: 
E1105 16:03:16.471955 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcvwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6894bd5d4c-lzwwl_calico-system(ee8d6811-815a-4547-b978-c3b4809dcbf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:16.472537 containerd[1621]: time="2025-11-05T16:03:16.472482791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:16.473457 kubelet[2841]: E1105 16:03:16.473422 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" podUID="ee8d6811-815a-4547-b978-c3b4809dcbf0" Nov 5 16:03:16.652586 
systemd-networkd[1510]: calic5587a74ecd: Gained IPv6LL Nov 5 16:03:16.715948 systemd-networkd[1510]: cali63ed61271f0: Gained IPv6LL Nov 5 16:03:16.956546 containerd[1621]: time="2025-11-05T16:03:16.956409700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:16.958031 containerd[1621]: time="2025-11-05T16:03:16.957963896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:16.958197 containerd[1621]: time="2025-11-05T16:03:16.957995686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:16.958342 kubelet[2841]: E1105 16:03:16.958287 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:16.958700 kubelet[2841]: E1105 16:03:16.958351 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:16.958700 kubelet[2841]: E1105 16:03:16.958507 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4fwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f5b56f6b8-w6xvl_calico-apiserver(e2f5a50e-d006-4d79-9642-9b2ea8b5bc20): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:16.959785 kubelet[2841]: E1105 16:03:16.959730 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" podUID="e2f5a50e-d006-4d79-9642-9b2ea8b5bc20" Nov 5 16:03:17.034835 kubelet[2841]: E1105 16:03:17.034798 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:17.035920 kubelet[2841]: E1105 16:03:17.035875 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" podUID="e2f5a50e-d006-4d79-9642-9b2ea8b5bc20" Nov 5 16:03:17.035920 kubelet[2841]: E1105 16:03:17.035899 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" podUID="ee8d6811-815a-4547-b978-c3b4809dcbf0" Nov 5 16:03:17.221577 kubelet[2841]: I1105 16:03:17.221421 2841 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 16:03:17.222097 kubelet[2841]: E1105 16:03:17.222074 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:17.348544 containerd[1621]: time="2025-11-05T16:03:17.348501626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28\" id:\"5af339ce0dead745912f2104c12bf9bde6a403793eb29a90467299e7669eb38f\" pid:4921 exited_at:{seconds:1762358597 nanos:348156119}" Nov 5 16:03:17.439968 containerd[1621]: time="2025-11-05T16:03:17.439905329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28\" id:\"db6475da6988662f2c452f5f2d909b8e9256b9af40984b6c4844fd95b5ca4213\" pid:4946 exited_at:{seconds:1762358597 nanos:439605860}" Nov 5 16:03:18.040681 kubelet[2841]: E1105 16:03:18.040597 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:18.041161 kubelet[2841]: E1105 16:03:18.040745 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:22.008127 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:52876.service - OpenSSH per-connection server daemon 
(10.0.0.1:52876). Nov 5 16:03:22.092702 sshd[4972]: Accepted publickey for core from 10.0.0.1 port 52876 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:22.095538 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:22.102865 systemd-logind[1589]: New session 8 of user core. Nov 5 16:03:22.107964 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 16:03:22.285292 sshd[4975]: Connection closed by 10.0.0.1 port 52876 Nov 5 16:03:22.287036 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:22.292557 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:52876.service: Deactivated successfully. Nov 5 16:03:22.294895 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 16:03:22.296043 systemd-logind[1589]: Session 8 logged out. Waiting for processes to exit. Nov 5 16:03:22.297665 systemd-logind[1589]: Removed session 8. Nov 5 16:03:25.865233 containerd[1621]: time="2025-11-05T16:03:25.864925798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:03:26.216681 containerd[1621]: time="2025-11-05T16:03:26.216505504Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:26.217695 containerd[1621]: time="2025-11-05T16:03:26.217654793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:03:26.217790 containerd[1621]: time="2025-11-05T16:03:26.217728303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:03:26.218002 kubelet[2841]: E1105 16:03:26.217930 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:26.218360 kubelet[2841]: E1105 16:03:26.217994 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:26.218360 kubelet[2841]: E1105 16:03:26.218142 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dd37dbad30e5453fb8d468d048f6be85,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bb68s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6745b64cf-lv9qv_calico-system(b92d66c6-1332-4195-9d17-9fdc4c32310f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:26.220903 containerd[1621]: time="2025-11-05T16:03:26.220516060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:03:26.596012 containerd[1621]: time="2025-11-05T16:03:26.595959953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:26.698284 containerd[1621]: time="2025-11-05T16:03:26.698211745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:03:26.706385 containerd[1621]: time="2025-11-05T16:03:26.706318649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:03:26.706693 kubelet[2841]: E1105 16:03:26.706615 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:03:26.706693 kubelet[2841]: E1105 16:03:26.706689 2841 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:03:26.706942 kubelet[2841]: E1105 16:03:26.706860 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb68s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6745b64cf-lv9qv_calico-system(b92d66c6-1332-4195-9d17-9fdc4c32310f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:26.708320 kubelet[2841]: E1105 16:03:26.708271 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6745b64cf-lv9qv" podUID="b92d66c6-1332-4195-9d17-9fdc4c32310f" Nov 5 16:03:27.302381 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:52890.service - OpenSSH per-connection server daemon (10.0.0.1:52890). 
Nov 5 16:03:27.374949 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 52890 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:27.376836 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:27.384368 systemd-logind[1589]: New session 9 of user core. Nov 5 16:03:27.391614 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 16:03:27.515239 sshd[5000]: Connection closed by 10.0.0.1 port 52890 Nov 5 16:03:27.515542 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:27.519444 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:52890.service: Deactivated successfully. Nov 5 16:03:27.521722 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 16:03:27.523224 systemd-logind[1589]: Session 9 logged out. Waiting for processes to exit. Nov 5 16:03:27.524145 systemd-logind[1589]: Removed session 9. Nov 5 16:03:27.864868 containerd[1621]: time="2025-11-05T16:03:27.864703417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:03:28.244659 containerd[1621]: time="2025-11-05T16:03:28.244604549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:28.372480 containerd[1621]: time="2025-11-05T16:03:28.372389976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:03:28.372679 containerd[1621]: time="2025-11-05T16:03:28.372444479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:03:28.372827 kubelet[2841]: E1105 16:03:28.372747 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:03:28.373408 kubelet[2841]: E1105 16:03:28.372829 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:03:28.373408 kubelet[2841]: E1105 16:03:28.373065 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8zcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m7g79_calico-system(09c014d2-b99d-493b-9e36-c9afae0fa214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:28.373547 containerd[1621]: time="2025-11-05T16:03:28.373292087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:28.753160 containerd[1621]: time="2025-11-05T16:03:28.753102259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:28.754399 containerd[1621]: time="2025-11-05T16:03:28.754364041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:28.754466 containerd[1621]: time="2025-11-05T16:03:28.754433413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:28.754640 kubelet[2841]: E1105 16:03:28.754599 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:28.754703 kubelet[2841]: E1105 16:03:28.754654 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:28.754951 kubelet[2841]: E1105 16:03:28.754897 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4fwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f5b56f6b8-w6xvl_calico-apiserver(e2f5a50e-d006-4d79-9642-9b2ea8b5bc20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:28.755289 containerd[1621]: time="2025-11-05T16:03:28.755235694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:03:28.756360 kubelet[2841]: E1105 16:03:28.756325 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" podUID="e2f5a50e-d006-4d79-9642-9b2ea8b5bc20" Nov 5 16:03:29.100242 containerd[1621]: 
time="2025-11-05T16:03:29.100055656Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:29.184983 containerd[1621]: time="2025-11-05T16:03:29.184886513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:03:29.185188 containerd[1621]: time="2025-11-05T16:03:29.184934264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:03:29.185348 kubelet[2841]: E1105 16:03:29.185237 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:29.185348 kubelet[2841]: E1105 16:03:29.185343 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:29.185757 kubelet[2841]: E1105 16:03:29.185662 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8zcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m7g79_calico-system(09c014d2-b99d-493b-9e36-c9afae0fa214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:29.186036 containerd[1621]: time="2025-11-05T16:03:29.185788884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:29.187037 kubelet[2841]: E1105 16:03:29.186999 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:03:29.645408 containerd[1621]: time="2025-11-05T16:03:29.645352346Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:29.798799 containerd[1621]: time="2025-11-05T16:03:29.798708245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:29.798799 containerd[1621]: time="2025-11-05T16:03:29.798757549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:29.799060 kubelet[2841]: E1105 16:03:29.798999 2841 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:29.799405 kubelet[2841]: E1105 16:03:29.799068 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:29.799405 kubelet[2841]: E1105 16:03:29.799210 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg9x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f5b56f6b8-98rmx_calico-apiserver(7606f5a8-f065-4081-b3bc-2344e91053d9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:29.800405 kubelet[2841]: E1105 16:03:29.800377 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" podUID="7606f5a8-f065-4081-b3bc-2344e91053d9" Nov 5 16:03:29.862781 containerd[1621]: time="2025-11-05T16:03:29.862737628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:03:30.384239 containerd[1621]: time="2025-11-05T16:03:30.384164679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:30.388355 containerd[1621]: time="2025-11-05T16:03:30.388285708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:03:30.388441 containerd[1621]: time="2025-11-05T16:03:30.388370910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:03:30.388617 kubelet[2841]: E1105 16:03:30.388567 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:30.388682 kubelet[2841]: E1105 16:03:30.388631 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:30.389027 kubelet[2841]: E1105 16:03:30.388959 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcvwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6894bd5d4c-lzwwl_calico-system(ee8d6811-815a-4547-b978-c3b4809dcbf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:30.389162 containerd[1621]: time="2025-11-05T16:03:30.389015560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:03:30.390376 kubelet[2841]: E1105 16:03:30.390335 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" podUID="ee8d6811-815a-4547-b978-c3b4809dcbf0" Nov 5 16:03:30.834646 containerd[1621]: time="2025-11-05T16:03:30.834543719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:30.835702 containerd[1621]: time="2025-11-05T16:03:30.835665123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:03:30.835829 containerd[1621]: time="2025-11-05T16:03:30.835712684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:30.836006 kubelet[2841]: E1105 16:03:30.835955 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:03:30.836345 kubelet[2841]: E1105 16:03:30.836020 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:03:30.836345 kubelet[2841]: E1105 16:03:30.836157 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ng6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5tq9_calico-system(49cc43d2-d1dd-4d90-a5d3-9c3601306d8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:30.837446 kubelet[2841]: E1105 16:03:30.837397 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5tq9" podUID="49cc43d2-d1dd-4d90-a5d3-9c3601306d8f" Nov 5 16:03:32.531181 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:38100.service - OpenSSH per-connection server daemon (10.0.0.1:38100). 
Nov 5 16:03:32.597173 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 38100 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:32.602048 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:32.611785 systemd-logind[1589]: New session 10 of user core. Nov 5 16:03:32.616986 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 16:03:32.781343 sshd[5021]: Connection closed by 10.0.0.1 port 38100 Nov 5 16:03:32.781699 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:32.789748 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:38100.service: Deactivated successfully. Nov 5 16:03:32.789994 systemd-logind[1589]: Session 10 logged out. Waiting for processes to exit. Nov 5 16:03:32.793141 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 16:03:32.798659 systemd-logind[1589]: Removed session 10. Nov 5 16:03:37.796521 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:38104.service - OpenSSH per-connection server daemon (10.0.0.1:38104). Nov 5 16:03:37.864228 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 38104 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:37.865902 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:37.871192 systemd-logind[1589]: New session 11 of user core. Nov 5 16:03:37.878938 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 16:03:38.041542 sshd[5048]: Connection closed by 10.0.0.1 port 38104 Nov 5 16:03:38.042303 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:38.056537 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:38104.service: Deactivated successfully. Nov 5 16:03:38.059303 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 16:03:38.062033 systemd-logind[1589]: Session 11 logged out. Waiting for processes to exit. 
Nov 5 16:03:38.064056 systemd-logind[1589]: Removed session 11. Nov 5 16:03:38.066694 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:38118.service - OpenSSH per-connection server daemon (10.0.0.1:38118). Nov 5 16:03:38.140364 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 38118 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:38.142717 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:38.168565 systemd-logind[1589]: New session 12 of user core. Nov 5 16:03:38.181441 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 16:03:38.465834 sshd[5065]: Connection closed by 10.0.0.1 port 38118 Nov 5 16:03:38.466931 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:38.493444 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:38118.service: Deactivated successfully. Nov 5 16:03:38.499705 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 16:03:38.502022 systemd-logind[1589]: Session 12 logged out. Waiting for processes to exit. Nov 5 16:03:38.516732 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:38128.service - OpenSSH per-connection server daemon (10.0.0.1:38128). Nov 5 16:03:38.522793 systemd-logind[1589]: Removed session 12. Nov 5 16:03:38.781520 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 38128 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:38.783986 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:38.795659 systemd-logind[1589]: New session 13 of user core. Nov 5 16:03:38.808861 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 5 16:03:38.859199 kubelet[2841]: E1105 16:03:38.859131 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6745b64cf-lv9qv" podUID="b92d66c6-1332-4195-9d17-9fdc4c32310f" Nov 5 16:03:39.023382 sshd[5080]: Connection closed by 10.0.0.1 port 38128 Nov 5 16:03:39.024394 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:39.030382 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:38128.service: Deactivated successfully. Nov 5 16:03:39.032815 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 16:03:39.034070 systemd-logind[1589]: Session 13 logged out. Waiting for processes to exit. Nov 5 16:03:39.035760 systemd-logind[1589]: Removed session 13. 
Nov 5 16:03:39.857888 kubelet[2841]: E1105 16:03:39.857632 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:03:40.857546 kubelet[2841]: E1105 16:03:40.857499 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" podUID="7606f5a8-f065-4081-b3bc-2344e91053d9" Nov 5 16:03:41.856328 kubelet[2841]: E1105 16:03:41.855936 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:41.857377 kubelet[2841]: E1105 16:03:41.856726 
2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:41.857377 kubelet[2841]: E1105 16:03:41.857117 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" podUID="e2f5a50e-d006-4d79-9642-9b2ea8b5bc20" Nov 5 16:03:41.857377 kubelet[2841]: E1105 16:03:41.857174 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" podUID="ee8d6811-815a-4547-b978-c3b4809dcbf0" Nov 5 16:03:42.856194 kubelet[2841]: E1105 16:03:42.856152 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:42.856372 kubelet[2841]: E1105 16:03:42.856259 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5tq9" podUID="49cc43d2-d1dd-4d90-a5d3-9c3601306d8f" Nov 5 16:03:44.041683 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:42608.service - OpenSSH per-connection server daemon (10.0.0.1:42608). Nov 5 16:03:44.101143 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 42608 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:44.103063 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:44.108179 systemd-logind[1589]: New session 14 of user core. Nov 5 16:03:44.113957 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 16:03:44.300135 sshd[5098]: Connection closed by 10.0.0.1 port 42608 Nov 5 16:03:44.300404 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:44.307183 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:42608.service: Deactivated successfully. Nov 5 16:03:44.311911 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 16:03:44.316005 systemd-logind[1589]: Session 14 logged out. Waiting for processes to exit. Nov 5 16:03:44.317754 systemd-logind[1589]: Removed session 14. Nov 5 16:03:47.437137 containerd[1621]: time="2025-11-05T16:03:47.437068102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28\" id:\"0d7ace33f9db6612077f21211fb9fbfd871cd2d44ebed47fc82d427be98e417d\" pid:5129 exit_status:1 exited_at:{seconds:1762358627 nanos:436628370}" Nov 5 16:03:49.314782 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:42622.service - OpenSSH per-connection server daemon (10.0.0.1:42622). 
Nov 5 16:03:49.360168 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 42622 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:49.361879 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:49.369931 systemd-logind[1589]: New session 15 of user core. Nov 5 16:03:49.376037 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 16:03:49.503650 sshd[5147]: Connection closed by 10.0.0.1 port 42622 Nov 5 16:03:49.504004 sshd-session[5144]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:49.509204 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:42622.service: Deactivated successfully. Nov 5 16:03:49.511383 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 16:03:49.512251 systemd-logind[1589]: Session 15 logged out. Waiting for processes to exit. Nov 5 16:03:49.513392 systemd-logind[1589]: Removed session 15. Nov 5 16:03:50.857719 containerd[1621]: time="2025-11-05T16:03:50.857663996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:03:51.278604 containerd[1621]: time="2025-11-05T16:03:51.278544770Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:51.280415 containerd[1621]: time="2025-11-05T16:03:51.280349959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:03:51.280499 containerd[1621]: time="2025-11-05T16:03:51.280412518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:03:51.280632 kubelet[2841]: E1105 16:03:51.280572 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:51.281205 kubelet[2841]: E1105 16:03:51.280636 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:51.281205 kubelet[2841]: E1105 16:03:51.280982 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dd37dbad30e5453fb8d468d048f6be85,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bb68s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6745b64cf-lv9qv_calico-system(b92d66c6-1332-4195-9d17-9fdc4c32310f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:51.281387 containerd[1621]: time="2025-11-05T16:03:51.281360819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:03:51.650449 containerd[1621]: time="2025-11-05T16:03:51.650300158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:51.665648 containerd[1621]: time="2025-11-05T16:03:51.665521791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:03:51.665648 containerd[1621]: time="2025-11-05T16:03:51.665588367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:03:51.665942 kubelet[2841]: E1105 16:03:51.665872 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:03:51.666004 kubelet[2841]: E1105 16:03:51.665947 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:03:51.666708 kubelet[2841]: E1105 16:03:51.666202 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8zcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Ter
minationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m7g79_calico-system(09c014d2-b99d-493b-9e36-c9afae0fa214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:51.666866 containerd[1621]: time="2025-11-05T16:03:51.666699215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:03:52.055895 containerd[1621]: time="2025-11-05T16:03:52.055690239Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:52.068702 containerd[1621]: time="2025-11-05T16:03:52.068632464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:03:52.068702 containerd[1621]: time="2025-11-05T16:03:52.068713488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:03:52.068959 kubelet[2841]: E1105 16:03:52.068898 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:03:52.069002 kubelet[2841]: E1105 16:03:52.068965 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:03:52.069906 containerd[1621]: time="2025-11-05T16:03:52.069445169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:03:52.069950 kubelet[2841]: E1105 16:03:52.069508 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb68s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivile
geEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6745b64cf-lv9qv_calico-system(b92d66c6-1332-4195-9d17-9fdc4c32310f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:52.071150 kubelet[2841]: E1105 16:03:52.071080 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6745b64cf-lv9qv" podUID="b92d66c6-1332-4195-9d17-9fdc4c32310f" Nov 5 16:03:52.443699 containerd[1621]: time="2025-11-05T16:03:52.443562747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:52.616994 containerd[1621]: time="2025-11-05T16:03:52.616916976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:03:52.617142 containerd[1621]: time="2025-11-05T16:03:52.617027313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:03:52.617282 kubelet[2841]: E1105 16:03:52.617192 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:52.617282 kubelet[2841]: E1105 16:03:52.617261 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:52.617873 kubelet[2841]: E1105 16:03:52.617374 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8zcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m7g79_calico-system(09c014d2-b99d-493b-9e36-c9afae0fa214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:52.618719 kubelet[2841]: E1105 16:03:52.618668 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:03:52.857020 containerd[1621]: time="2025-11-05T16:03:52.856967589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:53.307543 containerd[1621]: time="2025-11-05T16:03:53.307477969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:53.518002 containerd[1621]: time="2025-11-05T16:03:53.517920330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:53.518162 containerd[1621]: time="2025-11-05T16:03:53.517960175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:53.518277 kubelet[2841]: E1105 16:03:53.518216 2841 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:53.518361 kubelet[2841]: E1105 16:03:53.518278 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:53.518607 kubelet[2841]: E1105 16:03:53.518537 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg9x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f5b56f6b8-98rmx_calico-apiserver(7606f5a8-f065-4081-b3bc-2344e91053d9): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:53.518739 containerd[1621]: time="2025-11-05T16:03:53.518633827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:03:53.519737 kubelet[2841]: E1105 16:03:53.519707 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" podUID="7606f5a8-f065-4081-b3bc-2344e91053d9" Nov 5 16:03:54.091346 containerd[1621]: time="2025-11-05T16:03:54.091288293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:54.148698 containerd[1621]: time="2025-11-05T16:03:54.148626391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:03:54.148698 containerd[1621]: time="2025-11-05T16:03:54.148704248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:03:54.148926 kubelet[2841]: E1105 16:03:54.148830 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:54.148926 kubelet[2841]: E1105 16:03:54.148884 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:54.149376 kubelet[2841]: E1105 16:03:54.149254 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcvwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6894bd5d4c-lzwwl_calico-system(ee8d6811-815a-4547-b978-c3b4809dcbf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:54.149981 containerd[1621]: time="2025-11-05T16:03:54.149936145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:54.150902 kubelet[2841]: E1105 16:03:54.150854 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" podUID="ee8d6811-815a-4547-b978-c3b4809dcbf0" Nov 5 16:03:54.518336 containerd[1621]: time="2025-11-05T16:03:54.518279589Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:54.519593 containerd[1621]: time="2025-11-05T16:03:54.519562883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:54.519645 containerd[1621]: time="2025-11-05T16:03:54.519600664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:54.519832 kubelet[2841]: E1105 16:03:54.519790 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:54.519911 kubelet[2841]: E1105 16:03:54.519847 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:54.520064 kubelet[2841]: E1105 16:03:54.520012 2841 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4fwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f5b56f6b8-w6xvl_calico-apiserver(e2f5a50e-d006-4d79-9642-9b2ea8b5bc20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:54.521263 kubelet[2841]: E1105 16:03:54.521179 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" podUID="e2f5a50e-d006-4d79-9642-9b2ea8b5bc20" Nov 5 16:03:54.524240 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:35318.service - OpenSSH per-connection server daemon (10.0.0.1:35318). 
Nov 5 16:03:54.583593 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 35318 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:54.585429 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:54.591172 systemd-logind[1589]: New session 16 of user core. Nov 5 16:03:54.596960 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 16:03:54.720326 sshd[5170]: Connection closed by 10.0.0.1 port 35318 Nov 5 16:03:54.721139 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:54.726663 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:35318.service: Deactivated successfully. Nov 5 16:03:54.728749 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 16:03:54.729709 systemd-logind[1589]: Session 16 logged out. Waiting for processes to exit. Nov 5 16:03:54.731060 systemd-logind[1589]: Removed session 16. Nov 5 16:03:54.855731 kubelet[2841]: E1105 16:03:54.855601 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:03:54.857596 containerd[1621]: time="2025-11-05T16:03:54.857532479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:03:55.416165 containerd[1621]: time="2025-11-05T16:03:55.416102166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:55.417288 containerd[1621]: time="2025-11-05T16:03:55.417237270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:03:55.417336 containerd[1621]: time="2025-11-05T16:03:55.417326708Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:55.417541 kubelet[2841]: E1105 16:03:55.417489 2841 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:03:55.417541 kubelet[2841]: E1105 16:03:55.417543 2841 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:03:55.418148 kubelet[2841]: E1105 16:03:55.417683 2841 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ng6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5tq9_calico-system(49cc43d2-d1dd-4d90-a5d3-9c3601306d8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:55.418855 kubelet[2841]: E1105 16:03:55.418831 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5tq9" podUID="49cc43d2-d1dd-4d90-a5d3-9c3601306d8f" Nov 5 16:03:59.743284 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:35326.service - OpenSSH per-connection server daemon (10.0.0.1:35326). 
Nov 5 16:03:59.838844 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 35326 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:03:59.841399 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:59.856161 systemd-logind[1589]: New session 17 of user core. Nov 5 16:03:59.876714 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 16:04:00.075806 sshd[5189]: Connection closed by 10.0.0.1 port 35326 Nov 5 16:04:00.072142 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:00.088231 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:35326.service: Deactivated successfully. Nov 5 16:04:00.093653 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 16:04:00.098140 systemd-logind[1589]: Session 17 logged out. Waiting for processes to exit. Nov 5 16:04:00.106347 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:57462.service - OpenSSH per-connection server daemon (10.0.0.1:57462). Nov 5 16:04:00.110520 systemd-logind[1589]: Removed session 17. Nov 5 16:04:00.184730 sshd[5202]: Accepted publickey for core from 10.0.0.1 port 57462 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:04:00.187017 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:00.198318 systemd-logind[1589]: New session 18 of user core. Nov 5 16:04:00.204744 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 16:04:02.066555 sshd[5205]: Connection closed by 10.0.0.1 port 57462 Nov 5 16:04:02.067491 sshd-session[5202]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:02.088382 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:57462.service: Deactivated successfully. Nov 5 16:04:02.093815 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 16:04:02.095138 systemd-logind[1589]: Session 18 logged out. Waiting for processes to exit. 
Nov 5 16:04:02.098547 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:57478.service - OpenSSH per-connection server daemon (10.0.0.1:57478). Nov 5 16:04:02.100924 systemd-logind[1589]: Removed session 18. Nov 5 16:04:02.205370 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 57478 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:04:02.208037 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:02.217742 systemd-logind[1589]: New session 19 of user core. Nov 5 16:04:02.231371 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 16:04:03.869398 kubelet[2841]: E1105 16:04:03.869311 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" podUID="7606f5a8-f065-4081-b3bc-2344e91053d9" Nov 5 16:04:03.870490 kubelet[2841]: E1105 16:04:03.870361 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6745b64cf-lv9qv" podUID="b92d66c6-1332-4195-9d17-9fdc4c32310f" Nov 5 16:04:04.010936 sshd[5221]: Connection closed by 10.0.0.1 port 57478 Nov 5 16:04:04.012302 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:04.030145 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:57478.service: Deactivated successfully. Nov 5 16:04:04.035206 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 16:04:04.036662 systemd-logind[1589]: Session 19 logged out. Waiting for processes to exit. Nov 5 16:04:04.043333 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:57486.service - OpenSSH per-connection server daemon (10.0.0.1:57486). Nov 5 16:04:04.046591 systemd-logind[1589]: Removed session 19. Nov 5 16:04:04.123183 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 57486 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:04:04.127507 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:04.139870 systemd-logind[1589]: New session 20 of user core. Nov 5 16:04:04.149135 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 16:04:04.611093 sshd[5245]: Connection closed by 10.0.0.1 port 57486 Nov 5 16:04:04.613602 sshd-session[5242]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:04.628214 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:57486.service: Deactivated successfully. Nov 5 16:04:04.631782 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 16:04:04.635696 systemd-logind[1589]: Session 20 logged out. Waiting for processes to exit. Nov 5 16:04:04.638315 systemd-logind[1589]: Removed session 20. 
Nov 5 16:04:04.640068 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:57498.service - OpenSSH per-connection server daemon (10.0.0.1:57498). Nov 5 16:04:04.710975 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 57498 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:04:04.713218 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:04.723819 systemd-logind[1589]: New session 21 of user core. Nov 5 16:04:04.738803 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 16:04:04.899908 sshd[5260]: Connection closed by 10.0.0.1 port 57498 Nov 5 16:04:04.903370 sshd-session[5257]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:04.912455 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:57498.service: Deactivated successfully. Nov 5 16:04:04.917178 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 16:04:04.919008 systemd-logind[1589]: Session 21 logged out. Waiting for processes to exit. Nov 5 16:04:04.922251 systemd-logind[1589]: Removed session 21. 
Nov 5 16:04:05.858678 kubelet[2841]: E1105 16:04:05.858600 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" podUID="e2f5a50e-d006-4d79-9642-9b2ea8b5bc20" Nov 5 16:04:06.860579 kubelet[2841]: E1105 16:04:06.860301 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" podUID="ee8d6811-815a-4547-b978-c3b4809dcbf0" Nov 5 16:04:07.864434 kubelet[2841]: E1105 16:04:07.863194 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5tq9" podUID="49cc43d2-d1dd-4d90-a5d3-9c3601306d8f" Nov 5 16:04:07.867096 kubelet[2841]: E1105 16:04:07.867041 2841 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:04:09.860506 kubelet[2841]: E1105 16:04:09.860426 2841 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 16:04:09.918805 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:57502.service - OpenSSH per-connection server daemon (10.0.0.1:57502). Nov 5 16:04:10.001952 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 57502 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:04:10.004107 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:10.019986 systemd-logind[1589]: New session 22 of user core. Nov 5 16:04:10.027284 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 5 16:04:10.188900 sshd[5277]: Connection closed by 10.0.0.1 port 57502 Nov 5 16:04:10.190226 sshd-session[5274]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:10.199718 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:57502.service: Deactivated successfully. Nov 5 16:04:10.205130 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 16:04:10.206571 systemd-logind[1589]: Session 22 logged out. Waiting for processes to exit. Nov 5 16:04:10.210133 systemd-logind[1589]: Removed session 22. Nov 5 16:04:14.856788 kubelet[2841]: E1105 16:04:14.856405 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" podUID="7606f5a8-f065-4081-b3bc-2344e91053d9" Nov 5 16:04:15.202098 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:55014.service - OpenSSH per-connection server daemon (10.0.0.1:55014). Nov 5 16:04:15.253244 sshd[5293]: Accepted publickey for core from 10.0.0.1 port 55014 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:04:15.255178 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:15.260632 systemd-logind[1589]: New session 23 of user core. Nov 5 16:04:15.267957 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 16:04:15.393915 sshd[5296]: Connection closed by 10.0.0.1 port 55014 Nov 5 16:04:15.394282 sshd-session[5293]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:15.398692 systemd-logind[1589]: Session 23 logged out. Waiting for processes to exit. 
Nov 5 16:04:15.401091 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:55014.service: Deactivated successfully. Nov 5 16:04:15.403968 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 16:04:15.406186 systemd-logind[1589]: Removed session 23. Nov 5 16:04:16.856306 kubelet[2841]: E1105 16:04:16.856225 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" podUID="e2f5a50e-d006-4d79-9642-9b2ea8b5bc20" Nov 5 16:04:17.429870 containerd[1621]: time="2025-11-05T16:04:17.429820191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b75530ab8e6bada166598eebe4e114cf89d7be8cf8cf245ea7fae05c80daf28\" id:\"0a705422a3842fce3eb2e841643cc408b2187e6b3e364b9cbaf20bfaedf89117\" pid:5319 exited_at:{seconds:1762358657 nanos:429465754}" Nov 5 16:04:17.857705 kubelet[2841]: E1105 16:04:17.857625 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6745b64cf-lv9qv" podUID="b92d66c6-1332-4195-9d17-9fdc4c32310f" Nov 5 16:04:18.857249 kubelet[2841]: E1105 16:04:18.857187 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:04:20.413145 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:42502.service - OpenSSH per-connection server daemon (10.0.0.1:42502). Nov 5 16:04:20.490835 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 42502 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:04:20.492322 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:20.496811 systemd-logind[1589]: New session 24 of user core. Nov 5 16:04:20.505008 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 5 16:04:20.635748 sshd[5336]: Connection closed by 10.0.0.1 port 42502 Nov 5 16:04:20.638955 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:20.644858 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:42502.service: Deactivated successfully. Nov 5 16:04:20.651643 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 16:04:20.654002 systemd-logind[1589]: Session 24 logged out. Waiting for processes to exit. Nov 5 16:04:20.656124 systemd-logind[1589]: Removed session 24. Nov 5 16:04:21.857287 kubelet[2841]: E1105 16:04:21.857125 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5tq9" podUID="49cc43d2-d1dd-4d90-a5d3-9c3601306d8f" Nov 5 16:04:21.857860 kubelet[2841]: E1105 16:04:21.857249 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6894bd5d4c-lzwwl" podUID="ee8d6811-815a-4547-b978-c3b4809dcbf0" Nov 5 16:04:25.647908 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:42506.service - OpenSSH per-connection server daemon (10.0.0.1:42506). 
Nov 5 16:04:25.727526 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 42506 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:04:25.729282 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:25.733885 systemd-logind[1589]: New session 25 of user core. Nov 5 16:04:25.742937 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 16:04:25.854460 sshd[5352]: Connection closed by 10.0.0.1 port 42506 Nov 5 16:04:25.854885 sshd-session[5349]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:25.861882 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:42506.service: Deactivated successfully. Nov 5 16:04:25.864391 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 16:04:25.865429 systemd-logind[1589]: Session 25 logged out. Waiting for processes to exit. Nov 5 16:04:25.866569 systemd-logind[1589]: Removed session 25. Nov 5 16:04:29.857309 kubelet[2841]: E1105 16:04:29.857261 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-98rmx" podUID="7606f5a8-f065-4081-b3bc-2344e91053d9" Nov 5 16:04:29.857912 kubelet[2841]: E1105 16:04:29.857789 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m7g79" podUID="09c014d2-b99d-493b-9e36-c9afae0fa214" Nov 5 16:04:29.857912 kubelet[2841]: E1105 16:04:29.857826 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6745b64cf-lv9qv" podUID="b92d66c6-1332-4195-9d17-9fdc4c32310f" Nov 5 16:04:30.874683 systemd[1]: Started sshd@25-10.0.0.130:22-10.0.0.1:57928.service - OpenSSH per-connection server daemon (10.0.0.1:57928). 
Nov 5 16:04:30.936892 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 57928 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 16:04:30.939088 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:30.945178 systemd-logind[1589]: New session 26 of user core. Nov 5 16:04:30.953127 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 16:04:31.073268 sshd[5370]: Connection closed by 10.0.0.1 port 57928 Nov 5 16:04:31.075030 sshd-session[5367]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:31.080893 systemd[1]: sshd@25-10.0.0.130:22-10.0.0.1:57928.service: Deactivated successfully. Nov 5 16:04:31.085455 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 16:04:31.089746 systemd-logind[1589]: Session 26 logged out. Waiting for processes to exit. Nov 5 16:04:31.091573 systemd-logind[1589]: Removed session 26. Nov 5 16:04:31.857786 kubelet[2841]: E1105 16:04:31.857715 2841 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f5b56f6b8-w6xvl" podUID="e2f5a50e-d006-4d79-9642-9b2ea8b5bc20"