Nov 3 16:27:48.240190 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Nov 3 14:29:33 -00 2025
Nov 3 16:27:48.240215 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e9d538d1fb909beabac60fa40d47676de06d795b9bab2159a3819b90e410c77a
Nov 3 16:27:48.240226 kernel: BIOS-provided physical RAM map:
Nov 3 16:27:48.240233 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 3 16:27:48.240240 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 3 16:27:48.240247 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 3 16:27:48.240255 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 3 16:27:48.240262 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 3 16:27:48.240271 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 3 16:27:48.240278 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 3 16:27:48.240288 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 3 16:27:48.240295 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 3 16:27:48.240301 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 3 16:27:48.240308 kernel: NX (Execute Disable) protection: active
Nov 3 16:27:48.240317 kernel: APIC: Static calls initialized
Nov 3 16:27:48.240336 kernel: SMBIOS 2.8 present.
Nov 3 16:27:48.240346 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 3 16:27:48.240354 kernel: DMI: Memory slots populated: 1/1
Nov 3 16:27:48.240362 kernel: Hypervisor detected: KVM
Nov 3 16:27:48.240370 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 3 16:27:48.240377 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 3 16:27:48.240384 kernel: kvm-clock: using sched offset of 4164508293 cycles
Nov 3 16:27:48.240393 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 3 16:27:48.240401 kernel: tsc: Detected 2794.750 MHz processor
Nov 3 16:27:48.240411 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 3 16:27:48.240419 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 3 16:27:48.240427 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 3 16:27:48.240435 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 3 16:27:48.240443 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 3 16:27:48.240450 kernel: Using GB pages for direct mapping
Nov 3 16:27:48.240458 kernel: ACPI: Early table checksum verification disabled
Nov 3 16:27:48.240468 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 3 16:27:48.240476 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 3 16:27:48.240484 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 3 16:27:48.240491 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 3 16:27:48.240499 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 3 16:27:48.240507 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 3 16:27:48.240514 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 3 16:27:48.240524 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 3 16:27:48.240532 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 3 16:27:48.240543 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 3 16:27:48.240551 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 3 16:27:48.240559 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 3 16:27:48.240569 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 3 16:27:48.240577 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 3 16:27:48.240585 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 3 16:27:48.240593 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 3 16:27:48.240601 kernel: No NUMA configuration found
Nov 3 16:27:48.240609 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 3 16:27:48.240617 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 3 16:27:48.240627 kernel: Zone ranges:
Nov 3 16:27:48.240635 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 3 16:27:48.240643 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 3 16:27:48.240651 kernel: Normal empty
Nov 3 16:27:48.240662 kernel: Device empty
Nov 3 16:27:48.240672 kernel: Movable zone start for each node
Nov 3 16:27:48.240683 kernel: Early memory node ranges
Nov 3 16:27:48.240697 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 3 16:27:48.240708 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 3 16:27:48.240719 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 3 16:27:48.240729 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 3 16:27:48.240743 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 3 16:27:48.240752 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 3 16:27:48.240762 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 3 16:27:48.240770 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 3 16:27:48.240781 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 3 16:27:48.240789 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 3 16:27:48.240799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 3 16:27:48.240808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 3 16:27:48.240816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 3 16:27:48.240824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 3 16:27:48.240832 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 3 16:27:48.240842 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 3 16:27:48.240850 kernel: TSC deadline timer available
Nov 3 16:27:48.240858 kernel: CPU topo: Max. logical packages: 1
Nov 3 16:27:48.240866 kernel: CPU topo: Max. logical dies: 1
Nov 3 16:27:48.240874 kernel: CPU topo: Max. dies per package: 1
Nov 3 16:27:48.240881 kernel: CPU topo: Max. threads per core: 1
Nov 3 16:27:48.240889 kernel: CPU topo: Num. cores per package: 4
Nov 3 16:27:48.240900 kernel: CPU topo: Num. threads per package: 4
Nov 3 16:27:48.240908 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 3 16:27:48.240916 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 3 16:27:48.240924 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 3 16:27:48.240932 kernel: kvm-guest: setup PV sched yield
Nov 3 16:27:48.240940 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 3 16:27:48.240948 kernel: Booting paravirtualized kernel on KVM
Nov 3 16:27:48.240956 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 3 16:27:48.240967 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 3 16:27:48.240975 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 3 16:27:48.240983 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 3 16:27:48.240990 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 3 16:27:48.240998 kernel: kvm-guest: PV spinlocks enabled
Nov 3 16:27:48.241057 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 3 16:27:48.241067 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e9d538d1fb909beabac60fa40d47676de06d795b9bab2159a3819b90e410c77a
Nov 3 16:27:48.241079 kernel: random: crng init done
Nov 3 16:27:48.241088 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 3 16:27:48.241096 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 3 16:27:48.241104 kernel: Fallback order for Node 0: 0
Nov 3 16:27:48.241112 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 3 16:27:48.241120 kernel: Policy zone: DMA32
Nov 3 16:27:48.241130 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 3 16:27:48.241138 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 3 16:27:48.241146 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 3 16:27:48.241154 kernel: ftrace: allocated 157 pages with 5 groups
Nov 3 16:27:48.241162 kernel: Dynamic Preempt: voluntary
Nov 3 16:27:48.241170 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 3 16:27:48.241178 kernel: rcu: RCU event tracing is enabled.
Nov 3 16:27:48.241187 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 3 16:27:48.241197 kernel: Trampoline variant of Tasks RCU enabled.
Nov 3 16:27:48.241208 kernel: Rude variant of Tasks RCU enabled.
Nov 3 16:27:48.241216 kernel: Tracing variant of Tasks RCU enabled.
Nov 3 16:27:48.241224 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 3 16:27:48.241232 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 3 16:27:48.241241 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 3 16:27:48.241249 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 3 16:27:48.241262 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 3 16:27:48.241273 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 3 16:27:48.241284 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 3 16:27:48.241305 kernel: Console: colour VGA+ 80x25
Nov 3 16:27:48.241320 kernel: printk: legacy console [ttyS0] enabled
Nov 3 16:27:48.241341 kernel: ACPI: Core revision 20240827
Nov 3 16:27:48.241352 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 3 16:27:48.241363 kernel: APIC: Switch to symmetric I/O mode setup
Nov 3 16:27:48.241374 kernel: x2apic enabled
Nov 3 16:27:48.241386 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 3 16:27:48.241406 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 3 16:27:48.241419 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 3 16:27:48.241430 kernel: kvm-guest: setup PV IPIs
Nov 3 16:27:48.241445 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 3 16:27:48.241457 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 3 16:27:48.241468 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 3 16:27:48.241480 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 3 16:27:48.241492 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 3 16:27:48.241504 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 3 16:27:48.241515 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 3 16:27:48.241530 kernel: Spectre V2 : Mitigation: Retpolines
Nov 3 16:27:48.241542 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 3 16:27:48.241554 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 3 16:27:48.241565 kernel: active return thunk: retbleed_return_thunk
Nov 3 16:27:48.241577 kernel: RETBleed: Mitigation: untrained return thunk
Nov 3 16:27:48.241588 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 3 16:27:48.241599 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 3 16:27:48.241614 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 3 16:27:48.241626 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 3 16:27:48.241638 kernel: active return thunk: srso_return_thunk
Nov 3 16:27:48.241650 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 3 16:27:48.241662 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 3 16:27:48.241675 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 3 16:27:48.241685 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 3 16:27:48.241700 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 3 16:27:48.241711 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 3 16:27:48.241723 kernel: Freeing SMP alternatives memory: 32K
Nov 3 16:27:48.241734 kernel: pid_max: default: 32768 minimum: 301
Nov 3 16:27:48.241746 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 3 16:27:48.241757 kernel: landlock: Up and running.
Nov 3 16:27:48.241768 kernel: SELinux: Initializing.
Nov 3 16:27:48.241786 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 3 16:27:48.241798 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 3 16:27:48.241810 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 3 16:27:48.241821 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 3 16:27:48.241832 kernel: ... version: 0
Nov 3 16:27:48.241842 kernel: ... bit width: 48
Nov 3 16:27:48.241850 kernel: ... generic registers: 6
Nov 3 16:27:48.241861 kernel: ... value mask: 0000ffffffffffff
Nov 3 16:27:48.241869 kernel: ... max period: 00007fffffffffff
Nov 3 16:27:48.241878 kernel: ... fixed-purpose events: 0
Nov 3 16:27:48.241886 kernel: ... event mask: 000000000000003f
Nov 3 16:27:48.241894 kernel: signal: max sigframe size: 1776
Nov 3 16:27:48.241903 kernel: rcu: Hierarchical SRCU implementation.
Nov 3 16:27:48.241911 kernel: rcu: Max phase no-delay instances is 400.
Nov 3 16:27:48.241920 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 3 16:27:48.241930 kernel: smp: Bringing up secondary CPUs ...
Nov 3 16:27:48.241939 kernel: smpboot: x86: Booting SMP configuration:
Nov 3 16:27:48.241947 kernel: .... node #0, CPUs: #1 #2 #3
Nov 3 16:27:48.241955 kernel: smp: Brought up 1 node, 4 CPUs
Nov 3 16:27:48.241963 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 3 16:27:48.241972 kernel: Memory: 2447340K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15356K init, 2688K bss, 118476K reserved, 0K cma-reserved)
Nov 3 16:27:48.241981 kernel: devtmpfs: initialized
Nov 3 16:27:48.241991 kernel: x86/mm: Memory block size: 128MB
Nov 3 16:27:48.242000 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 3 16:27:48.242028 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 3 16:27:48.242036 kernel: pinctrl core: initialized pinctrl subsystem
Nov 3 16:27:48.242047 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 3 16:27:48.242056 kernel: audit: initializing netlink subsys (disabled)
Nov 3 16:27:48.242064 kernel: audit: type=2000 audit(1762187264.440:1): state=initialized audit_enabled=0 res=1
Nov 3 16:27:48.242075 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 3 16:27:48.242084 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 3 16:27:48.242092 kernel: cpuidle: using governor menu
Nov 3 16:27:48.242100 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 3 16:27:48.242109 kernel: dca service started, version 1.12.1
Nov 3 16:27:48.242117 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 3 16:27:48.242126 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 3 16:27:48.242136 kernel: PCI: Using configuration type 1 for base access
Nov 3 16:27:48.242145 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 3 16:27:48.242153 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 3 16:27:48.242161 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 3 16:27:48.242170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 3 16:27:48.242178 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 3 16:27:48.242186 kernel: ACPI: Added _OSI(Module Device)
Nov 3 16:27:48.242197 kernel: ACPI: Added _OSI(Processor Device)
Nov 3 16:27:48.242205 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 3 16:27:48.242213 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 3 16:27:48.242224 kernel: ACPI: Interpreter enabled
Nov 3 16:27:48.242241 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 3 16:27:48.242252 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 3 16:27:48.242260 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 3 16:27:48.242272 kernel: PCI: Using E820 reservations for host bridge windows
Nov 3 16:27:48.242281 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 3 16:27:48.242289 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 3 16:27:48.242608 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 3 16:27:48.242887 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 3 16:27:48.243129 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 3 16:27:48.243151 kernel: PCI host bridge to bus 0000:00
Nov 3 16:27:48.243387 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 3 16:27:48.243589 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 3 16:27:48.243761 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 3 16:27:48.243934 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 3 16:27:48.244140 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 3 16:27:48.244358 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 3 16:27:48.244538 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 3 16:27:48.244749 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 3 16:27:48.244981 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 3 16:27:48.245198 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 3 16:27:48.245445 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 3 16:27:48.245664 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 3 16:27:48.245880 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 3 16:27:48.246134 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 3 16:27:48.246369 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 3 16:27:48.246568 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 3 16:27:48.246762 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 3 16:27:48.246963 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 3 16:27:48.247210 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 3 16:27:48.247430 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 3 16:27:48.247640 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 3 16:27:48.247828 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 3 16:27:48.248047 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 3 16:27:48.248275 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 3 16:27:48.248501 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 3 16:27:48.248707 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 3 16:27:48.248939 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 3 16:27:48.249152 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 3 16:27:48.249380 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 3 16:27:48.249583 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 3 16:27:48.249798 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 3 16:27:48.249990 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 3 16:27:48.250206 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 3 16:27:48.250225 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 3 16:27:48.250235 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 3 16:27:48.250243 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 3 16:27:48.250256 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 3 16:27:48.250265 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 3 16:27:48.250273 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 3 16:27:48.250285 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 3 16:27:48.250294 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 3 16:27:48.250302 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 3 16:27:48.250311 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 3 16:27:48.250319 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 3 16:27:48.250337 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 3 16:27:48.250345 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 3 16:27:48.250358 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 3 16:27:48.250366 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 3 16:27:48.250375 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 3 16:27:48.250384 kernel: iommu: Default domain type: Translated
Nov 3 16:27:48.250392 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 3 16:27:48.250401 kernel: PCI: Using ACPI for IRQ routing
Nov 3 16:27:48.250410 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 3 16:27:48.250418 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 3 16:27:48.250429 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 3 16:27:48.250608 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 3 16:27:48.250781 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 3 16:27:48.250988 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 3 16:27:48.251025 kernel: vgaarb: loaded
Nov 3 16:27:48.251038 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 3 16:27:48.251056 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 3 16:27:48.251068 kernel: clocksource: Switched to clocksource kvm-clock
Nov 3 16:27:48.251078 kernel: VFS: Disk quotas dquot_6.6.0
Nov 3 16:27:48.251089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 3 16:27:48.251101 kernel: pnp: PnP ACPI init
Nov 3 16:27:48.251321 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 3 16:27:48.251345 kernel: pnp: PnP ACPI: found 6 devices
Nov 3 16:27:48.251359 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 3 16:27:48.251368 kernel: NET: Registered PF_INET protocol family
Nov 3 16:27:48.251377 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 3 16:27:48.251386 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 3 16:27:48.251394 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 3 16:27:48.251403 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 3 16:27:48.251412 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 3 16:27:48.251423 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 3 16:27:48.251431 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 3 16:27:48.251440 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 3 16:27:48.251449 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 3 16:27:48.251457 kernel: NET: Registered PF_XDP protocol family
Nov 3 16:27:48.251655 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 3 16:27:48.251862 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 3 16:27:48.252094 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 3 16:27:48.252299 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 3 16:27:48.252519 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 3 16:27:48.252720 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 3 16:27:48.252737 kernel: PCI: CLS 0 bytes, default 64
Nov 3 16:27:48.252749 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 3 16:27:48.252766 kernel: Initialise system trusted keyrings
Nov 3 16:27:48.252783 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 3 16:27:48.252794 kernel: Key type asymmetric registered
Nov 3 16:27:48.252805 kernel: Asymmetric key parser 'x509' registered
Nov 3 16:27:48.252817 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 3 16:27:48.252829 kernel: io scheduler mq-deadline registered
Nov 3 16:27:48.252841 kernel: io scheduler kyber registered
Nov 3 16:27:48.252852 kernel: io scheduler bfq registered
Nov 3 16:27:48.252867 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 3 16:27:48.252880 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 3 16:27:48.252892 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 3 16:27:48.252904 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 3 16:27:48.252915 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 3 16:27:48.252927 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 3 16:27:48.252939 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 3 16:27:48.252954 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 3 16:27:48.252965 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 3 16:27:48.253223 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 3 16:27:48.253242 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 3 16:27:48.253460 kernel: rtc_cmos 00:04: registered as rtc0
Nov 3 16:27:48.253668 kernel: rtc_cmos 00:04: setting system clock to 2025-11-03T16:27:46 UTC (1762187266)
Nov 3 16:27:48.253976 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 3 16:27:48.253995 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 3 16:27:48.254031 kernel: NET: Registered PF_INET6 protocol family
Nov 3 16:27:48.254043 kernel: Segment Routing with IPv6
Nov 3 16:27:48.254055 kernel: In-situ OAM (IOAM) with IPv6
Nov 3 16:27:48.254066 kernel: NET: Registered PF_PACKET protocol family
Nov 3 16:27:48.254078 kernel: Key type dns_resolver registered
Nov 3 16:27:48.254095 kernel: IPI shorthand broadcast: enabled
Nov 3 16:27:48.254107 kernel: sched_clock: Marking stable (1931002799, 200891492)->(2196088675, -64194384)
Nov 3 16:27:48.254119 kernel: registered taskstats version 1
Nov 3 16:27:48.254131 kernel: Loading compiled-in X.509 certificates
Nov 3 16:27:48.254142 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 64feed854ff613845f0d7f947c8dcfa082277d4d'
Nov 3 16:27:48.254154 kernel: Demotion targets for Node 0: null
Nov 3 16:27:48.254166 kernel: Key type .fscrypt registered
Nov 3 16:27:48.254180 kernel: Key type fscrypt-provisioning registered
Nov 3 16:27:48.254191 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 3 16:27:48.254202 kernel: ima: Allocated hash algorithm: sha1
Nov 3 16:27:48.254213 kernel: ima: No architecture policies found
Nov 3 16:27:48.254225 kernel: clk: Disabling unused clocks
Nov 3 16:27:48.254236 kernel: Freeing unused kernel image (initmem) memory: 15356K
Nov 3 16:27:48.254248 kernel: Write protecting the kernel read-only data: 45056k
Nov 3 16:27:48.254264 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 3 16:27:48.254279 kernel: Run /init as init process
Nov 3 16:27:48.254290 kernel: with arguments:
Nov 3 16:27:48.254302 kernel: /init
Nov 3 16:27:48.254313 kernel: with environment:
Nov 3 16:27:48.254332 kernel: HOME=/
Nov 3 16:27:48.254344 kernel: TERM=linux
Nov 3 16:27:48.254355 kernel: SCSI subsystem initialized
Nov 3 16:27:48.254371 kernel: libata version 3.00 loaded.
Nov 3 16:27:48.254599 kernel: ahci 0000:00:1f.2: version 3.0
Nov 3 16:27:48.254641 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 3 16:27:48.254859 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 3 16:27:48.255100 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 3 16:27:48.255345 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 3 16:27:48.255609 kernel: scsi host0: ahci
Nov 3 16:27:48.255845 kernel: scsi host1: ahci
Nov 3 16:27:48.256113 kernel: scsi host2: ahci
Nov 3 16:27:48.256365 kernel: scsi host3: ahci
Nov 3 16:27:48.256645 kernel: scsi host4: ahci
Nov 3 16:27:48.256891 kernel: scsi host5: ahci
Nov 3 16:27:48.256910 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 3 16:27:48.256922 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 3 16:27:48.256934 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 3 16:27:48.256946 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 3 16:27:48.256958 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 3 16:27:48.256975 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 3 16:27:48.256987 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 3 16:27:48.257000 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 3 16:27:48.257027 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 3 16:27:48.257039 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 3 16:27:48.257051 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 3 16:27:48.257063 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 3 16:27:48.257079 kernel: ata3.00: LPM support broken, forcing max_power
Nov 3 16:27:48.257090 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 3 16:27:48.257102 kernel: ata3.00: applying bridge limits
Nov 3 16:27:48.257113 kernel: ata3.00: LPM support broken, forcing max_power
Nov 3 16:27:48.257124 kernel: ata3.00: configured for UDMA/100
Nov 3 16:27:48.257399 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 3 16:27:48.257653 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 3 16:27:48.257894 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 3 16:27:48.257913 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 3 16:27:48.257927 kernel: GPT:16515071 != 27000831
Nov 3 16:27:48.257939 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 3 16:27:48.257952 kernel: GPT:16515071 != 27000831
Nov 3 16:27:48.257964 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 3 16:27:48.257981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 3 16:27:48.258253 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 3 16:27:48.258273 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 3 16:27:48.258526 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 3 16:27:48.258545 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 3 16:27:48.258558 kernel: device-mapper: uevent: version 1.0.3
Nov 3 16:27:48.258577 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 3 16:27:48.258593 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 3 16:27:48.258609 kernel: raid6: avx2x4 gen() 30128 MB/s
Nov 3 16:27:48.258621 kernel: raid6: avx2x2 gen() 30273 MB/s
Nov 3 16:27:48.258633 kernel: raid6: avx2x1 gen() 25655 MB/s
Nov 3 16:27:48.258644 kernel: raid6: using algorithm avx2x2 gen() 30273 MB/s
Nov 3 16:27:48.258660 kernel: raid6: .... xor() 19873 MB/s, rmw enabled
Nov 3 16:27:48.258671 kernel: raid6: using avx2x2 recovery algorithm
Nov 3 16:27:48.258684 kernel: xor: automatically using best checksumming function avx
Nov 3 16:27:48.258696 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 3 16:27:48.258709 kernel: BTRFS: device fsid d2035e3e-4715-43f6-863f-bec7c8f679e4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (179)
Nov 3 16:27:48.258721 kernel: BTRFS info (device dm-0): first mount of filesystem d2035e3e-4715-43f6-863f-bec7c8f679e4
Nov 3 16:27:48.258734 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 3 16:27:48.258750 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 3 16:27:48.258762 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 3 16:27:48.258774 kernel: loop: module loaded
Nov 3 16:27:48.258786 kernel: loop0: detected capacity change from 0 to 100136
Nov 3 16:27:48.258798 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 3 16:27:48.258812 systemd[1]: Successfully made /usr/ read-only.
Nov 3 16:27:48.258832 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 3 16:27:48.258846 systemd[1]: Detected virtualization kvm.
Nov 3 16:27:48.258859 systemd[1]: Detected architecture x86-64.
Nov 3 16:27:48.258872 systemd[1]: Running in initrd.
Nov 3 16:27:48.258884 systemd[1]: No hostname configured, using default hostname.
Nov 3 16:27:48.258897 systemd[1]: Hostname set to <localhost>.
Nov 3 16:27:48.258913 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 3 16:27:48.258925 systemd[1]: Queued start job for default target initrd.target.
Nov 3 16:27:48.258938 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 3 16:27:48.258951 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 3 16:27:48.258964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 3 16:27:48.258978 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 3 16:27:48.258991 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 3 16:27:48.259031 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 3 16:27:48.259045 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 3 16:27:48.259057 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 3 16:27:48.259075 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 3 16:27:48.259104 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 3 16:27:48.259152 systemd[1]: Reached target paths.target - Path Units.
Nov 3 16:27:48.259186 systemd[1]: Reached target slices.target - Slice Units.
Nov 3 16:27:48.259216 systemd[1]: Reached target swap.target - Swaps.
Nov 3 16:27:48.259250 systemd[1]: Reached target timers.target - Timer Units.
Nov 3 16:27:48.259279 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 3 16:27:48.259308 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 3 16:27:48.259349 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 3 16:27:48.259387 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 3 16:27:48.259420 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 3 16:27:48.259450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 3 16:27:48.259479 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 3 16:27:48.259514 systemd[1]: Reached target sockets.target - Socket Units.
Nov 3 16:27:48.259545 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 3 16:27:48.259579 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 3 16:27:48.259622 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 3 16:27:48.259651 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 3 16:27:48.259686 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 3 16:27:48.259716 systemd[1]: Starting systemd-fsck-usr.service...
Nov 3 16:27:48.259745 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 3 16:27:48.259779 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 3 16:27:48.259809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 3 16:27:48.259852 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 3 16:27:48.259878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 3 16:27:48.259911 systemd[1]: Finished systemd-fsck-usr.service.
Nov 3 16:27:48.259949 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 3 16:27:48.260089 systemd-journald[313]: Collecting audit messages is disabled.
Nov 3 16:27:48.260121 systemd-journald[313]: Journal started
Nov 3 16:27:48.260149 systemd-journald[313]: Runtime Journal (/run/log/journal/8c588d4f0c074309b93aada7d6bce44a) is 6M, max 48.2M, 42.2M free.
Nov 3 16:27:48.264048 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 3 16:27:48.264116 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 3 16:27:48.268037 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 3 16:27:48.271487 systemd-modules-load[316]: Inserted module 'br_netfilter'
Nov 3 16:27:48.337048 kernel: Bridge firewalling registered
Nov 3 16:27:48.335490 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 3 16:27:48.340310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 3 16:27:48.344809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 3 16:27:48.346247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 3 16:27:48.360918 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 3 16:27:48.362337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 3 16:27:48.375148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 3 16:27:48.379118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 3 16:27:48.379611 systemd-tmpfiles[340]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 3 16:27:48.386836 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 3 16:27:48.390368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 3 16:27:48.394424 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 3 16:27:48.404951 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 3 16:27:48.440720 dracut-cmdline[359]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e9d538d1fb909beabac60fa40d47676de06d795b9bab2159a3819b90e410c77a
Nov 3 16:27:48.466486 systemd-resolved[355]: Positive Trust Anchors:
Nov 3 16:27:48.466527 systemd-resolved[355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 3 16:27:48.466541 systemd-resolved[355]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 3 16:27:48.466588 systemd-resolved[355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 3 16:27:48.495311 systemd-resolved[355]: Defaulting to hostname 'linux'.
Nov 3 16:27:48.498761 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 3 16:27:48.502992 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 3 16:27:48.592048 kernel: Loading iSCSI transport class v2.0-870.
Nov 3 16:27:48.608043 kernel: iscsi: registered transport (tcp)
Nov 3 16:27:48.632044 kernel: iscsi: registered transport (qla4xxx)
Nov 3 16:27:48.632115 kernel: QLogic iSCSI HBA Driver
Nov 3 16:27:48.660620 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 3 16:27:48.682438 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 3 16:27:48.683905 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 3 16:27:48.765281 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 3 16:27:48.768883 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 3 16:27:48.771259 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 3 16:27:48.809746 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 3 16:27:48.817190 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 3 16:27:48.852575 systemd-udevd[603]: Using default interface naming scheme 'v257'.
Nov 3 16:27:48.868837 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 3 16:27:48.875867 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 3 16:27:48.907252 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 3 16:27:48.913586 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 3 16:27:48.918313 dracut-pre-trigger[673]: rd.md=0: removing MD RAID activation
Nov 3 16:27:48.954126 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 3 16:27:48.959951 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 3 16:27:48.973908 systemd-networkd[708]: lo: Link UP
Nov 3 16:27:48.973933 systemd-networkd[708]: lo: Gained carrier
Nov 3 16:27:48.974775 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 3 16:27:48.977128 systemd[1]: Reached target network.target - Network.
Nov 3 16:27:49.102214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 3 16:27:49.107484 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 3 16:27:49.175127 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 3 16:27:49.187061 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 3 16:27:49.217034 kernel: cryptd: max_cpu_qlen set to 1000
Nov 3 16:27:49.228043 kernel: AES CTR mode by8 optimization enabled
Nov 3 16:27:49.231230 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 3 16:27:49.237765 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 3 16:27:49.252938 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 3 16:27:49.269607 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 3 16:27:49.269832 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 3 16:27:49.285893 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 3 16:27:49.285917 disk-uuid[817]: Primary Header is updated.
Nov 3 16:27:49.285917 disk-uuid[817]: Secondary Entries is updated.
Nov 3 16:27:49.285917 disk-uuid[817]: Secondary Header is updated.
Nov 3 16:27:49.272177 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 3 16:27:49.279502 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 3 16:27:49.291703 systemd-networkd[708]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 3 16:27:49.291710 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 3 16:27:49.301796 systemd-networkd[708]: eth0: Link UP
Nov 3 16:27:49.302061 systemd-networkd[708]: eth0: Gained carrier
Nov 3 16:27:49.302074 systemd-networkd[708]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 3 16:27:49.320330 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 3 16:27:49.350246 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 3 16:27:49.423974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 3 16:27:49.446117 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 3 16:27:49.448443 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 3 16:27:49.452441 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 3 16:27:49.456234 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 3 16:27:49.499280 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 3 16:27:50.348831 disk-uuid[824]: Warning: The kernel is still using the old partition table.
Nov 3 16:27:50.348831 disk-uuid[824]: The new table will be used at the next reboot or after you
Nov 3 16:27:50.348831 disk-uuid[824]: run partprobe(8) or kpartx(8)
Nov 3 16:27:50.348831 disk-uuid[824]: The operation has completed successfully.
Nov 3 16:27:50.362804 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 3 16:27:50.362990 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 3 16:27:50.366562 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 3 16:27:50.426466 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865)
Nov 3 16:27:50.426532 kernel: BTRFS info (device vda6): first mount of filesystem 78119da2-288c-42f1-b313-2f60b0c42ea0
Nov 3 16:27:50.426564 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 3 16:27:50.431832 kernel: BTRFS info (device vda6): turning on async discard
Nov 3 16:27:50.431866 kernel: BTRFS info (device vda6): enabling free space tree
Nov 3 16:27:50.441039 kernel: BTRFS info (device vda6): last unmount of filesystem 78119da2-288c-42f1-b313-2f60b0c42ea0
Nov 3 16:27:50.442086 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 3 16:27:50.446616 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 3 16:27:50.510178 systemd-networkd[708]: eth0: Gained IPv6LL
Nov 3 16:27:50.721551 ignition[884]: Ignition 2.22.0
Nov 3 16:27:50.721566 ignition[884]: Stage: fetch-offline
Nov 3 16:27:50.721646 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Nov 3 16:27:50.721660 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 3 16:27:50.721848 ignition[884]: parsed url from cmdline: ""
Nov 3 16:27:50.721852 ignition[884]: no config URL provided
Nov 3 16:27:50.721860 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Nov 3 16:27:50.721871 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Nov 3 16:27:50.721924 ignition[884]: op(1): [started] loading QEMU firmware config module
Nov 3 16:27:50.721929 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 3 16:27:50.756469 ignition[884]: op(1): [finished] loading QEMU firmware config module
Nov 3 16:27:50.837658 ignition[884]: parsing config with SHA512: b8a2cd69733272061c5d317d0980601af7e4638fe0395e3140636a0dab4e25d464a82c712d3eea4f0e97f4a21865d8158e4581f2668b122b21f73071e6a8c0c3
Nov 3 16:27:50.847680 unknown[884]: fetched base config from "system"
Nov 3 16:27:50.847692 unknown[884]: fetched user config from "qemu"
Nov 3 16:27:50.848135 ignition[884]: fetch-offline: fetch-offline passed
Nov 3 16:27:50.848214 ignition[884]: Ignition finished successfully
Nov 3 16:27:50.853002 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 3 16:27:50.856303 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 3 16:27:50.857402 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 3 16:27:50.975051 ignition[897]: Ignition 2.22.0
Nov 3 16:27:50.975069 ignition[897]: Stage: kargs
Nov 3 16:27:50.975212 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Nov 3 16:27:50.975223 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 3 16:27:50.980092 ignition[897]: kargs: kargs passed
Nov 3 16:27:50.980151 ignition[897]: Ignition finished successfully
Nov 3 16:27:50.984620 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 3 16:27:50.988977 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 3 16:27:51.082991 ignition[905]: Ignition 2.22.0
Nov 3 16:27:51.083025 ignition[905]: Stage: disks
Nov 3 16:27:51.083218 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Nov 3 16:27:51.083232 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 3 16:27:51.084480 ignition[905]: disks: disks passed
Nov 3 16:27:51.088260 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 3 16:27:51.084547 ignition[905]: Ignition finished successfully
Nov 3 16:27:51.090392 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 3 16:27:51.092942 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 3 16:27:51.096543 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 3 16:27:51.098184 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 3 16:27:51.099854 systemd[1]: Reached target basic.target - Basic System.
Nov 3 16:27:51.103615 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 3 16:27:51.150018 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 3 16:27:51.158189 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 3 16:27:51.163358 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 3 16:27:51.287065 kernel: EXT4-fs (vda9): mounted filesystem 1ffbd672-550b-441b-a3fe-835a1dd6d831 r/w with ordered data mode. Quota mode: none.
Nov 3 16:27:51.288784 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 3 16:27:51.290790 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 3 16:27:51.294701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 3 16:27:51.298256 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 3 16:27:51.299655 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 3 16:27:51.299700 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 3 16:27:51.299729 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 3 16:27:51.321032 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 3 16:27:51.326371 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 3 16:27:51.335902 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923)
Nov 3 16:27:51.335942 kernel: BTRFS info (device vda6): first mount of filesystem 78119da2-288c-42f1-b313-2f60b0c42ea0
Nov 3 16:27:51.335983 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 3 16:27:51.338472 kernel: BTRFS info (device vda6): turning on async discard
Nov 3 16:27:51.338514 kernel: BTRFS info (device vda6): enabling free space tree
Nov 3 16:27:51.340851 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 3 16:27:51.402720 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Nov 3 16:27:51.409088 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Nov 3 16:27:51.415829 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Nov 3 16:27:51.420679 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 3 16:27:51.532625 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 3 16:27:51.535357 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 3 16:27:51.538524 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 3 16:27:51.561226 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 3 16:27:51.563991 kernel: BTRFS info (device vda6): last unmount of filesystem 78119da2-288c-42f1-b313-2f60b0c42ea0
Nov 3 16:27:51.581262 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 3 16:27:51.694549 ignition[1038]: INFO : Ignition 2.22.0
Nov 3 16:27:51.694549 ignition[1038]: INFO : Stage: mount
Nov 3 16:27:51.697188 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 3 16:27:51.697188 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 3 16:27:51.705327 ignition[1038]: INFO : mount: mount passed
Nov 3 16:27:51.706604 ignition[1038]: INFO : Ignition finished successfully
Nov 3 16:27:51.711150 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 3 16:27:51.715730 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 3 16:27:51.748594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 3 16:27:51.780464 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1048)
Nov 3 16:27:51.780522 kernel: BTRFS info (device vda6): first mount of filesystem 78119da2-288c-42f1-b313-2f60b0c42ea0
Nov 3 16:27:51.780535 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 3 16:27:51.785638 kernel: BTRFS info (device vda6): turning on async discard
Nov 3 16:27:51.785707 kernel: BTRFS info (device vda6): enabling free space tree
Nov 3 16:27:51.788488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 3 16:27:51.840417 ignition[1065]: INFO : Ignition 2.22.0
Nov 3 16:27:51.840417 ignition[1065]: INFO : Stage: files
Nov 3 16:27:51.843287 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 3 16:27:51.843287 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 3 16:27:51.843287 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Nov 3 16:27:51.843287 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 3 16:27:51.843287 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 3 16:27:51.853737 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 3 16:27:51.853737 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 3 16:27:51.853737 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 3 16:27:51.853737 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 3 16:27:51.853737 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 3 16:27:51.847629 unknown[1065]: wrote ssh authorized keys file for user: core
Nov 3 16:27:51.908896 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 3 16:27:52.073453 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 3 16:27:52.073453 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 3 16:27:52.080423 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 3 16:27:52.080423 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 3 16:27:52.080423 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 3 16:27:52.080423 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 3 16:27:52.080423 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 3 16:27:52.080423 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 3 16:27:52.098241 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 3 16:27:52.326212 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 3 16:27:52.329268 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 3 16:27:52.329268 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 3 16:27:52.406978 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 3 16:27:52.406978 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 3 16:27:52.414707 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 3 16:27:52.879068 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 3 16:27:53.534542 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 3 16:27:53.534542 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 3 16:27:53.541051 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 3 16:27:53.546963 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 3 16:27:53.546963 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 3 16:27:53.546963 ignition[1065]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 3 16:27:53.554415 ignition[1065]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 3 16:27:53.554415 ignition[1065]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 3 16:27:53.554415 ignition[1065]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 3 16:27:53.554415 ignition[1065]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 3 16:27:53.586067 ignition[1065]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 3 16:27:53.593653 ignition[1065]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 3 16:27:53.597130 ignition[1065]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 3 16:27:53.597130 ignition[1065]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 3 16:27:53.602399 ignition[1065]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 3 16:27:53.602399 ignition[1065]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 3 16:27:53.602399 ignition[1065]: INFO : files: createResultFile: createFiles: 
op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 3 16:27:53.602399 ignition[1065]: INFO : files: files passed Nov 3 16:27:53.602399 ignition[1065]: INFO : Ignition finished successfully Nov 3 16:27:53.614680 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 3 16:27:53.619983 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 3 16:27:53.623909 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 3 16:27:53.643393 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 3 16:27:53.643543 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 3 16:27:53.653663 initrd-setup-root-after-ignition[1096]: grep: /sysroot/oem/oem-release: No such file or directory Nov 3 16:27:53.659062 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 3 16:27:53.661716 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 3 16:27:53.665140 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 3 16:27:53.669549 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 3 16:27:53.670711 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 3 16:27:53.677835 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 3 16:27:53.767125 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 3 16:27:53.767276 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 3 16:27:53.771569 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 3 16:27:53.772382 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 3 16:27:53.779444 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 3 16:27:53.780512 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 3 16:27:53.820483 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 3 16:27:53.824075 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 3 16:27:53.856295 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 3 16:27:53.856526 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 3 16:27:53.857607 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 3 16:27:53.865355 systemd[1]: Stopped target timers.target - Timer Units. Nov 3 16:27:53.866938 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 3 16:27:53.867084 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 3 16:27:53.871698 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 3 16:27:53.875024 systemd[1]: Stopped target basic.target - Basic System. Nov 3 16:27:53.875862 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 3 16:27:53.876400 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 3 16:27:53.883651 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 3 16:27:53.886863 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
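The kubernetes sysext from op(9)/op(a) and the unit handling in op(b) through op(11) map onto the storage and systemd sections of the same config; roughly as follows (a sketch: the paths, URL, and unit names are from the log, while the unit body is not shown there and is left as a placeholder):

  storage:
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
    files:
      - path: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
        contents:
          source: https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw
  systemd:
    units:
      - name: prepare-helm.service
        enabled: true
        contents: |
          [Unit]
          Description=Unpack helm to /opt/bin
          # unit body not shown in the log
      - name: coreos-metadata.service
        enabled: false

The later grep failures for oem-release and enabled-sysext.conf are benign: initrd-setup-root-after-ignition merely probes for optional files that this image does not ship.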
Nov 3 16:27:53.890504 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 3 16:27:53.893566 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 3 16:27:53.899980 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 3 16:27:53.900749 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 3 16:27:53.903903 systemd[1]: Stopped target swap.target - Swaps. Nov 3 16:27:53.906883 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 3 16:27:53.907022 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 3 16:27:53.911915 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 3 16:27:53.912825 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 3 16:27:53.918627 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 3 16:27:53.918947 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 3 16:27:53.922345 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 3 16:27:53.922460 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 3 16:27:53.927458 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 3 16:27:53.927590 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 3 16:27:53.928675 systemd[1]: Stopped target paths.target - Path Units. Nov 3 16:27:53.932738 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 3 16:27:53.939827 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 3 16:27:53.944650 systemd[1]: Stopped target slices.target - Slice Units. Nov 3 16:27:53.945674 systemd[1]: Stopped target sockets.target - Socket Units. Nov 3 16:27:53.948111 systemd[1]: iscsid.socket: Deactivated successfully. Nov 3 16:27:53.948216 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 3 16:27:53.950889 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 3 16:27:53.950978 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 3 16:27:53.953661 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 3 16:27:53.953791 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 3 16:27:53.956616 systemd[1]: ignition-files.service: Deactivated successfully. Nov 3 16:27:53.956726 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 3 16:27:53.964125 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 3 16:27:53.965730 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 3 16:27:53.974410 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 3 16:27:53.974587 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 3 16:27:53.978071 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 3 16:27:53.978192 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 3 16:27:53.978899 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 3 16:27:53.979019 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 3 16:27:53.992000 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 3 16:27:53.992200 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 3 16:27:54.028077 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 3 16:27:54.112906 ignition[1122]: INFO : Ignition 2.22.0 Nov 3 16:27:54.112906 ignition[1122]: INFO : Stage: umount Nov 3 16:27:54.115816 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 3 16:27:54.115816 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 3 16:27:54.115816 ignition[1122]: INFO : umount: umount passed Nov 3 16:27:54.115816 ignition[1122]: INFO : Ignition finished successfully Nov 3 16:27:54.123576 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 3 16:27:54.123725 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 3 16:27:54.126873 systemd[1]: Stopped target network.target - Network. Nov 3 16:27:54.129476 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 3 16:27:54.129544 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 3 16:27:54.132353 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 3 16:27:54.132413 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 3 16:27:54.133218 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 3 16:27:54.133272 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 3 16:27:54.138367 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 3 16:27:54.138422 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 3 16:27:54.139591 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 3 16:27:54.143614 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 3 16:27:54.151000 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 3 16:27:54.151218 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 3 16:27:54.152999 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 3 16:27:54.153376 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 3 16:27:54.156652 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 3 16:27:54.156804 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 3 16:27:54.167733 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 3 16:27:54.167883 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 3 16:27:54.173771 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 3 16:27:54.174532 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 3 16:27:54.174577 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 3 16:27:54.179637 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 3 16:27:54.182354 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 3 16:27:54.182418 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 3 16:27:54.185519 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 3 16:27:54.185572 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 3 16:27:54.188664 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 3 16:27:54.188716 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 3 16:27:54.191880 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 3 16:27:54.214864 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 3 16:27:54.215134 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 3 16:27:54.216465 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 3 16:27:54.216517 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 3 16:27:54.221397 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 3 16:27:54.221445 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 3 16:27:54.221972 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 3 16:27:54.222036 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 3 16:27:54.223131 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 3 16:27:54.223195 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 3 16:27:54.235181 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 3 16:27:54.235251 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 3 16:27:54.240915 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 3 16:27:54.242587 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 3 16:27:54.242660 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 3 16:27:54.243491 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 3 16:27:54.243542 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 3 16:27:54.250070 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 3 16:27:54.250125 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 3 16:27:54.254152 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 3 16:27:54.254231 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 3 16:27:54.254688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 3 16:27:54.254737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 3 16:27:54.277919 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 3 16:27:54.278048 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 3 16:27:54.307819 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 3 16:27:54.307965 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 3 16:27:54.311098 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 3 16:27:54.312523 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 3 16:27:54.325810 systemd[1]: Switching root. Nov 3 16:27:54.366446 systemd-journald[313]: Journal stopped Nov 3 16:27:55.975919 systemd-journald[313]: Received SIGTERM from PID 1 (systemd). 
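"Switching root" is the pivot from the initramfs into the real root filesystem; the journal stops because PID 1 re-executes itself inside /sysroot and journald is restarted there. The documented manual equivalent is a single systemctl verb (shown for illustration; here systemd performs the transition internally):

  # from inside the initramfs, once /sysroot is fully set up
  systemctl switch-root /sysroot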
Nov 3 16:27:55.976035 kernel: SELinux: policy capability network_peer_controls=1 Nov 3 16:27:55.976060 kernel: SELinux: policy capability open_perms=1 Nov 3 16:27:55.976079 kernel: SELinux: policy capability extended_socket_class=1 Nov 3 16:27:55.976098 kernel: SELinux: policy capability always_check_network=0 Nov 3 16:27:55.976111 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 3 16:27:55.976138 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 3 16:27:55.976150 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 3 16:27:55.976162 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 3 16:27:55.976174 kernel: SELinux: policy capability userspace_initial_context=0 Nov 3 16:27:55.976187 kernel: audit: type=1403 audit(1762187275.038:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 3 16:27:55.976207 systemd[1]: Successfully loaded SELinux policy in 68.122ms. Nov 3 16:27:55.976229 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.269ms. Nov 3 16:27:55.976245 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 3 16:27:55.976259 systemd[1]: Detected virtualization kvm. Nov 3 16:27:55.976271 systemd[1]: Detected architecture x86-64. Nov 3 16:27:55.976283 systemd[1]: Detected first boot. Nov 3 16:27:55.976296 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 3 16:27:55.976311 zram_generator::config[1167]: No configuration found. Nov 3 16:27:55.976329 kernel: Guest personality initialized and is inactive Nov 3 16:27:55.976341 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 3 16:27:55.976353 kernel: Initialized host personality Nov 3 16:27:55.976366 kernel: NET: Registered PF_VSOCK protocol family Nov 3 16:27:55.976378 systemd[1]: Populated /etc with preset unit settings. Nov 3 16:27:55.976393 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 3 16:27:55.976408 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 3 16:27:55.976421 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 3 16:27:55.976438 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 3 16:27:55.976456 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 3 16:27:55.976469 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 3 16:27:55.976487 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 3 16:27:55.976500 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 3 16:27:55.976515 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 3 16:27:55.976528 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 3 16:27:55.976541 systemd[1]: Created slice user.slice - User and Session Slice. Nov 3 16:27:55.976557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 3 16:27:55.976575 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
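The zram_generator entry means no zram devices were requested on this host. When swap-on-zram is wanted, the generator reads a small INI file; a typical example (size expression and algorithm are illustrative):

  # /etc/systemd/zram-generator.conf
  [zram0]
  zram-size = min(ram / 2, 4096)
  compression-algorithm = zstd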
Nov 3 16:27:55.976591 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 3 16:27:55.976608 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 3 16:27:55.976626 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 3 16:27:55.976640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 3 16:27:55.976653 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 3 16:27:55.976666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 3 16:27:55.976679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 3 16:27:55.976691 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 3 16:27:55.976710 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 3 16:27:55.976723 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 3 16:27:55.976735 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 3 16:27:55.976748 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 3 16:27:55.976761 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 3 16:27:55.976774 systemd[1]: Reached target slices.target - Slice Units. Nov 3 16:27:55.976787 systemd[1]: Reached target swap.target - Swaps. Nov 3 16:27:55.976801 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 3 16:27:55.976814 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 3 16:27:55.976827 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 3 16:27:55.976840 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 3 16:27:55.976853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 3 16:27:55.976865 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 3 16:27:55.976878 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 3 16:27:55.976894 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 3 16:27:55.976907 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 3 16:27:55.976920 systemd[1]: Mounting media.mount - External Media Directory... Nov 3 16:27:55.976933 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 3 16:27:55.976946 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 3 16:27:55.976959 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 3 16:27:55.976971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 3 16:27:55.976988 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 3 16:27:55.977101 systemd[1]: Reached target machines.target - Containers. Nov 3 16:27:55.977117 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 3 16:27:55.977138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 3 16:27:55.977151 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 3 16:27:55.977164 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 3 16:27:55.977176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 3 16:27:55.977192 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 3 16:27:55.977205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 3 16:27:55.977218 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 3 16:27:55.977233 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 3 16:27:55.977248 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 3 16:27:55.977261 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 3 16:27:55.977276 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 3 16:27:55.977289 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 3 16:27:55.977302 systemd[1]: Stopped systemd-fsck-usr.service. Nov 3 16:27:55.977315 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 3 16:27:55.977327 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 3 16:27:55.977340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 3 16:27:55.977352 kernel: ACPI: bus type drm_connector registered Nov 3 16:27:55.977367 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 3 16:27:55.977381 kernel: fuse: init (API version 7.41) Nov 3 16:27:55.977394 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 3 16:27:55.977406 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 3 16:27:55.977419 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 3 16:27:55.977456 systemd-journald[1252]: Collecting audit messages is disabled. Nov 3 16:27:55.977479 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 3 16:27:55.977492 systemd-journald[1252]: Journal started Nov 3 16:27:55.977514 systemd-journald[1252]: Runtime Journal (/run/log/journal/8c588d4f0c074309b93aada7d6bce44a) is 6M, max 48.2M, 42.2M free. Nov 3 16:27:55.653355 systemd[1]: Queued start job for default target multi-user.target. Nov 3 16:27:55.676702 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 3 16:27:55.677768 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 3 16:27:55.984162 systemd[1]: Started systemd-journald.service - Journal Service. Nov 3 16:27:55.987737 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 3 16:27:55.989671 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 3 16:27:55.991666 systemd[1]: Mounted media.mount - External Media Directory. Nov 3 16:27:55.993615 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 3 16:27:55.995596 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 3 16:27:55.997555 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
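The modprobe@configfs, modprobe@dm_mod, modprobe@drm, and similar jobs are all instances of systemd's modprobe@.service template, which amounts to little more than this (an abbreviated sketch of the stock template):

  # modprobe@.service; %i expands to the instance name, i.e. the module
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no

  [Service]
  Type=oneshot
  ExecStart=-/usr/sbin/modprobe -abq %i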
Nov 3 16:27:55.999524 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 3 16:27:56.001881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 3 16:27:56.004392 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 3 16:27:56.004678 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 3 16:27:56.007193 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 3 16:27:56.007442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 3 16:27:56.009622 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 3 16:27:56.009927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 3 16:27:56.012393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 3 16:27:56.012677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 3 16:27:56.015164 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 3 16:27:56.015414 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 3 16:27:56.017475 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 3 16:27:56.017692 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 3 16:27:56.019836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 3 16:27:56.022105 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 3 16:27:56.025437 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 3 16:27:56.027969 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 3 16:27:56.049366 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 3 16:27:56.052081 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 3 16:27:56.055913 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 3 16:27:56.059250 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 3 16:27:56.061188 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 3 16:27:56.061317 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 3 16:27:56.064308 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 3 16:27:56.066671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 3 16:27:56.069200 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 3 16:27:56.073309 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 3 16:27:56.074001 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 3 16:27:56.075203 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 3 16:27:56.077313 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 3 16:27:56.078960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 3 16:27:56.084212 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Nov 3 16:27:56.088745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 3 16:27:56.093138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 3 16:27:56.097611 systemd-journald[1252]: Time spent on flushing to /var/log/journal/8c588d4f0c074309b93aada7d6bce44a is 19.327ms for 967 entries. Nov 3 16:27:56.097611 systemd-journald[1252]: System Journal (/var/log/journal/8c588d4f0c074309b93aada7d6bce44a) is 8M, max 163.5M, 155.5M free. Nov 3 16:27:56.131211 systemd-journald[1252]: Received client request to flush runtime journal. Nov 3 16:27:56.131264 kernel: loop1: detected capacity change from 0 to 111544 Nov 3 16:27:56.096510 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 3 16:27:56.102278 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 3 16:27:56.105527 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 3 16:27:56.113497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 3 16:27:56.121508 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 3 16:27:56.132621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 3 16:27:56.141144 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 3 16:27:56.142119 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Nov 3 16:27:56.142465 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Nov 3 16:27:56.145028 kernel: loop2: detected capacity change from 0 to 119080 Nov 3 16:27:56.150163 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 3 16:27:56.155316 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 3 16:27:56.170288 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 3 16:27:56.196032 kernel: loop3: detected capacity change from 0 to 219144 Nov 3 16:27:56.205640 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 3 16:27:56.211796 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 3 16:27:56.215349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 3 16:27:56.231092 kernel: loop4: detected capacity change from 0 to 111544 Nov 3 16:27:56.237498 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 3 16:27:56.255037 kernel: loop5: detected capacity change from 0 to 119080 Nov 3 16:27:56.265648 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Nov 3 16:27:56.265670 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Nov 3 16:27:56.270550 kernel: loop6: detected capacity change from 0 to 219144 Nov 3 16:27:56.270879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 3 16:27:56.278899 (sd-merge)[1309]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 3 16:27:56.285804 (sd-merge)[1309]: Merged extensions into '/usr'. Nov 3 16:27:56.290788 systemd[1]: Reload requested from client PID 1286 ('systemd-sysext') (unit systemd-sysext.service)... Nov 3 16:27:56.290811 systemd[1]: Reloading... Nov 3 16:27:56.371314 zram_generator::config[1348]: No configuration found. 
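The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes images onto /usr, which is why systemd immediately reloads itself to pick up the units they add. After boot the merge can be inspected and redone with the standard tooling:

  systemd-sysext status    # show which images are merged into each hierarchy
  systemd-sysext refresh   # unmerge and re-merge after images change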
Nov 3 16:27:56.396498 systemd-resolved[1307]: Positive Trust Anchors: Nov 3 16:27:56.396524 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 3 16:27:56.396529 systemd-resolved[1307]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 3 16:27:56.396572 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 3 16:27:56.404300 systemd-resolved[1307]: Defaulting to hostname 'linux'. Nov 3 16:27:56.615147 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 3 16:27:56.615241 systemd[1]: Reloading finished in 323 ms. Nov 3 16:27:56.646623 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 3 16:27:56.649067 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 3 16:27:56.651463 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 3 16:27:56.656560 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 3 16:27:56.678815 systemd[1]: Starting ensure-sysext.service... Nov 3 16:27:56.681645 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 3 16:27:56.704104 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Nov 3 16:27:56.704138 systemd[1]: Reloading... Nov 3 16:27:56.711774 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 3 16:27:56.712073 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 3 16:27:56.712391 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 3 16:27:56.712668 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 3 16:27:56.713634 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 3 16:27:56.713905 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 3 16:27:56.713976 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 3 16:27:56.720423 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 3 16:27:56.720433 systemd-tmpfiles[1382]: Skipping /boot Nov 3 16:27:56.731419 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 3 16:27:56.731433 systemd-tmpfiles[1382]: Skipping /boot Nov 3 16:27:56.783048 zram_generator::config[1415]: No configuration found. Nov 3 16:27:56.973034 systemd[1]: Reloading finished in 268 ms. Nov 3 16:27:56.993400 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 3 16:27:57.016392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 3 16:27:57.027367 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
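The "Duplicate line for path" warnings mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first definition and ignores the rest, so they are harmless. For reference, each line follows the standard tmpfiles.d(5) shape (the path is one of those from the log; mode and ownership here are illustrative):

  # type  path             mode  user  group  age  argument
  d       /var/lib/nfs/sm  0700  root  root   -    -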
Nov 3 16:27:57.030483 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 3 16:27:57.034031 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 3 16:27:57.038001 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 3 16:27:57.044174 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 3 16:27:57.049224 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 3 16:27:57.053791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 3 16:27:57.053959 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 3 16:27:57.057350 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 3 16:27:57.061218 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 3 16:27:57.065322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 3 16:27:57.067312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 3 16:27:57.067416 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 3 16:27:57.067514 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 3 16:27:57.072719 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 3 16:27:57.072886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 3 16:27:57.073060 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 3 16:27:57.073153 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 3 16:27:57.073236 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 3 16:27:57.083869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 3 16:27:57.084132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 3 16:27:57.090991 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 3 16:27:57.094963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 3 16:27:57.095777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 3 16:27:57.100888 systemd-udevd[1455]: Using default interface naming scheme 'v257'. Nov 3 16:27:57.102594 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 3 16:27:57.102929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 3 16:27:57.110778 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 3 16:27:57.116872 systemd[1]: Finished ensure-sysext.service. 
Nov 3 16:27:57.122609 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 3 16:27:57.122927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 3 16:27:57.124388 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 3 16:27:57.126278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 3 16:27:57.126321 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 3 16:27:57.126375 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 3 16:27:57.126430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 3 16:27:57.129798 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 3 16:27:57.132754 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 3 16:27:57.149187 augenrules[1487]: No rules Nov 3 16:27:57.150417 systemd[1]: audit-rules.service: Deactivated successfully. Nov 3 16:27:57.151945 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 3 16:27:57.154518 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 3 16:27:57.154807 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 3 16:27:57.161781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 3 16:27:57.168051 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 3 16:27:57.193840 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 3 16:27:57.196838 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 3 16:27:57.263481 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 3 16:27:57.312581 systemd[1]: Reached target time-set.target - System Time Set. Nov 3 16:27:57.351878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 3 16:27:57.358489 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 3 16:27:57.367595 systemd-networkd[1500]: lo: Link UP Nov 3 16:27:57.367610 systemd-networkd[1500]: lo: Gained carrier Nov 3 16:27:57.370747 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 3 16:27:57.372899 systemd[1]: Reached target network.target - Network. Nov 3 16:27:57.375798 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 3 16:27:57.380758 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 3 16:27:57.395215 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Nov 3 16:27:57.402346 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 3 16:27:57.413544 systemd-networkd[1500]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 3 16:27:57.413558 systemd-networkd[1500]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 3 16:27:57.415058 systemd-networkd[1500]: eth0: Link UP Nov 3 16:27:57.418372 kernel: mousedev: PS/2 mouse device common for all mice Nov 3 16:27:57.416240 systemd-networkd[1500]: eth0: Gained carrier Nov 3 16:27:57.416256 systemd-networkd[1500]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 3 16:27:57.424291 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 3 16:27:57.428606 systemd-networkd[1500]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 3 16:27:57.429042 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 3 16:27:57.432403 systemd-timesyncd[1484]: Network configuration changed, trying to establish connection. Nov 3 16:27:57.434732 systemd-timesyncd[1484]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 3 16:27:57.434879 systemd-timesyncd[1484]: Initial clock synchronization to Mon 2025-11-03 16:27:57.397948 UTC. Nov 3 16:27:57.435050 kernel: ACPI: button: Power Button [PWRF] Nov 3 16:27:57.520677 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 3 16:27:57.521091 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 3 16:27:57.616243 kernel: kvm_amd: TSC scaling supported Nov 3 16:27:57.616316 kernel: kvm_amd: Nested Virtualization enabled Nov 3 16:27:57.616331 kernel: kvm_amd: Nested Paging enabled Nov 3 16:27:57.617998 kernel: kvm_amd: LBR virtualization supported Nov 3 16:27:57.618048 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 3 16:27:57.619202 kernel: kvm_amd: Virtual GIF supported Nov 3 16:27:57.643349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 3 16:27:57.679064 kernel: EDAC MC: Ver: 3.0.0 Nov 3 16:27:57.739086 ldconfig[1453]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 3 16:27:57.746197 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 3 16:27:57.793922 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 3 16:27:57.796912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 3 16:27:57.841578 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 3 16:27:57.844056 systemd[1]: Reached target sysinit.target - System Initialization. Nov 3 16:27:57.846539 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 3 16:27:57.849032 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 3 16:27:57.851433 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 3 16:27:57.854034 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 3 16:27:57.856475 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
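The "potentially unpredictable interface name" note just means eth0 matched the catch-all zz-default.network, which DHCPs any interface no other .network file claims. Its core is approximately this (a sketch, not the verbatim file Flatcar ships):

  [Match]
  Name=*

  [Network]
  DHCP=yes

That is consistent with the DHCPv4 lease 10.0.0.124/16 via gateway 10.0.0.1 acquired a moment later.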
Nov 3 16:27:57.858913 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 3 16:27:57.861227 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 3 16:27:57.861271 systemd[1]: Reached target paths.target - Path Units. Nov 3 16:27:57.863035 systemd[1]: Reached target timers.target - Timer Units. Nov 3 16:27:57.866381 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 3 16:27:57.870922 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 3 16:27:57.876133 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 3 16:27:57.878659 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 3 16:27:57.880943 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 3 16:27:57.887920 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 3 16:27:57.890312 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 3 16:27:57.894653 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 3 16:27:57.898749 systemd[1]: Reached target sockets.target - Socket Units. Nov 3 16:27:57.900578 systemd[1]: Reached target basic.target - Basic System. Nov 3 16:27:57.902460 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 3 16:27:57.902528 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 3 16:27:57.904097 systemd[1]: Starting containerd.service - containerd container runtime... Nov 3 16:27:57.907407 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 3 16:27:57.911657 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 3 16:27:57.919653 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 3 16:27:57.924386 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 3 16:27:57.927137 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 3 16:27:57.930282 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 3 16:27:57.935543 jq[1568]: false Nov 3 16:27:57.937546 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 3 16:27:57.941527 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 3 16:27:57.944629 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 3 16:27:57.949495 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing passwd entry cache Nov 3 16:27:57.949736 oslogin_cache_refresh[1570]: Refreshing passwd entry cache Nov 3 16:27:57.950039 extend-filesystems[1569]: Found /dev/vda6 Nov 3 16:27:57.953375 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 3 16:27:57.956603 extend-filesystems[1569]: Found /dev/vda9 Nov 3 16:27:57.959815 extend-filesystems[1569]: Checking size of /dev/vda9 Nov 3 16:27:57.961156 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 3 16:27:57.962929 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Nov 3 16:27:57.963065 oslogin_cache_refresh[1570]: Failure getting users, quitting Nov 3 16:27:57.964453 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting users, quitting Nov 3 16:27:57.964453 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 3 16:27:57.964453 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing group entry cache Nov 3 16:27:57.963870 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 3 16:27:57.963096 oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 3 16:27:57.963140 oslogin_cache_refresh[1570]: Refreshing group entry cache Nov 3 16:27:57.965736 systemd[1]: Starting update-engine.service - Update Engine... Nov 3 16:27:57.970744 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 3 16:27:57.974197 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting groups, quitting Nov 3 16:27:57.974197 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 3 16:27:57.972709 oslogin_cache_refresh[1570]: Failure getting groups, quitting Nov 3 16:27:57.972720 oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 3 16:27:57.975427 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 3 16:27:57.977709 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 3 16:27:57.978068 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 3 16:27:57.978458 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 3 16:27:57.978783 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 3 16:27:57.981615 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 3 16:27:57.981859 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 3 16:27:57.984730 systemd[1]: motdgen.service: Deactivated successfully. Nov 3 16:27:57.984972 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 3 16:27:58.048111 extend-filesystems[1569]: Resized partition /dev/vda9 Nov 3 16:27:58.056084 extend-filesystems[1612]: resize2fs 1.47.3 (8-Jul-2025) Nov 3 16:27:58.064129 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 3 16:27:58.075037 update_engine[1585]: I20251103 16:27:58.074914 1585 main.cc:92] Flatcar Update Engine starting Nov 3 16:27:58.086912 jq[1589]: true Nov 3 16:27:58.095099 tar[1593]: linux-amd64/LICENSE Nov 3 16:27:58.095099 tar[1593]: linux-amd64/helm Nov 3 16:27:58.107262 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 3 16:27:58.117248 dbus-daemon[1566]: [system] SELinux support is enabled Nov 3 16:27:58.120476 jq[1618]: true Nov 3 16:27:58.117808 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
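extend-filesystems grows the root filesystem in place: resize2fs takes /dev/vda9 from 456704 to 1784827 4k blocks while it is mounted at /. Done by hand, once the partition itself has been enlarged, the step is a single online resize (ext4 supports growing while mounted):

  resize2fs /dev/vda9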
Nov 3 16:27:58.138386 update_engine[1585]: I20251103 16:27:58.124867 1585 update_check_scheduler.cc:74] Next update check in 7m58s Nov 3 16:27:58.125850 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 3 16:27:58.125899 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 3 16:27:58.129950 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 3 16:27:58.129974 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 3 16:27:58.137574 systemd[1]: Started update-engine.service - Update Engine. Nov 3 16:27:58.141964 extend-filesystems[1612]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 3 16:27:58.141964 extend-filesystems[1612]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 3 16:27:58.141964 extend-filesystems[1612]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 3 16:27:58.153778 extend-filesystems[1569]: Resized filesystem in /dev/vda9 Nov 3 16:27:58.145277 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 3 16:27:58.149093 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 3 16:27:58.149813 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 3 16:27:58.216285 systemd-logind[1582]: Watching system buttons on /dev/input/event2 (Power Button) Nov 3 16:27:58.216315 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 3 16:27:58.216646 systemd-logind[1582]: New seat seat0. Nov 3 16:27:58.219943 systemd[1]: Started systemd-logind.service - User Login Management. Nov 3 16:27:58.310938 bash[1638]: Updated "/home/core/.ssh/authorized_keys" Nov 3 16:27:58.305145 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 3 16:27:58.314095 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 3 16:27:58.329165 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 3 16:27:58.503823 sshd_keygen[1603]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 3 16:27:58.535459 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 3 16:27:58.540413 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 3 16:27:58.564425 systemd[1]: issuegen.service: Deactivated successfully. Nov 3 16:27:58.564879 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 3 16:27:58.568909 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 3 16:27:58.598320 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 3 16:27:58.603312 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 3 16:27:58.608252 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 3 16:27:58.610265 systemd[1]: Reached target getty.target - Login Prompts. Nov 3 16:27:58.698405 systemd-networkd[1500]: eth0: Gained IPv6LL Nov 3 16:27:58.705344 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
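locksmithd's strategy="reboot" comes from the /etc/flatcar/update.conf that Ignition wrote in op(8) earlier in this boot. The log does not show the file's contents; a typical one looks like this (values assumed):

  GROUP=stable
  REBOOT_STRATEGY=reboot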
Nov 3 16:27:58.707670 containerd[1613]: time="2025-11-03T16:27:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 3 16:27:58.708540 containerd[1613]: time="2025-11-03T16:27:58.708483531Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 3 16:27:58.709262 systemd[1]: Reached target network-online.target - Network is Online. Nov 3 16:27:58.713343 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 3 16:27:58.716846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 3 16:27:58.726794 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 3 16:27:58.740748 containerd[1613]: time="2025-11-03T16:27:58.740696352Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.125µs" Nov 3 16:27:58.740748 containerd[1613]: time="2025-11-03T16:27:58.740743406Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 3 16:27:58.740815 containerd[1613]: time="2025-11-03T16:27:58.740802519Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 3 16:27:58.740835 containerd[1613]: time="2025-11-03T16:27:58.740815828Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 3 16:27:58.741617 containerd[1613]: time="2025-11-03T16:27:58.741581648Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 3 16:27:58.741617 containerd[1613]: time="2025-11-03T16:27:58.741614652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 3 16:27:58.741739 containerd[1613]: time="2025-11-03T16:27:58.741703486Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 3 16:27:58.741739 containerd[1613]: time="2025-11-03T16:27:58.741731387Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 3 16:27:58.742045 containerd[1613]: time="2025-11-03T16:27:58.742017946Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 3 16:27:58.742045 containerd[1613]: time="2025-11-03T16:27:58.742039472Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 3 16:27:58.742112 containerd[1613]: time="2025-11-03T16:27:58.742053592Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 3 16:27:58.742112 containerd[1613]: time="2025-11-03T16:27:58.742062278Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 3 16:27:58.742284 containerd[1613]: time="2025-11-03T16:27:58.742257351Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 3 
16:27:58.742284 containerd[1613]: time="2025-11-03T16:27:58.742278166Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 3 16:27:58.742407 containerd[1613]: time="2025-11-03T16:27:58.742384163Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 3 16:27:58.742641 containerd[1613]: time="2025-11-03T16:27:58.742615792Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 3 16:27:58.742670 containerd[1613]: time="2025-11-03T16:27:58.742655832Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 3 16:27:58.742670 containerd[1613]: time="2025-11-03T16:27:58.742666720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 3 16:27:58.742742 containerd[1613]: time="2025-11-03T16:27:58.742719758Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 3 16:27:58.746173 containerd[1613]: time="2025-11-03T16:27:58.746138594Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 3 16:27:58.746306 containerd[1613]: time="2025-11-03T16:27:58.746275853Z" level=info msg="metadata content store policy set" policy=shared Nov 3 16:27:58.756340 containerd[1613]: time="2025-11-03T16:27:58.756279475Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.756855025Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.756943569Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.756963604Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.756976384Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.756987241Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.756997299Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.757026550Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.757039160Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.757051369Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.757062467Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.757073205Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.757082040Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 3 16:27:58.757536 containerd[1613]: time="2025-11-03T16:27:58.757103026Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757254847Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757279355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757293285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757311268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757322396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757343342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757361834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757372583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757389145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757400063Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757410000Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757439081Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757488948Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 3 16:27:58.757804 containerd[1613]: time="2025-11-03T16:27:58.757501147Z" level=info msg="Start snapshots syncer" Nov 3 16:27:58.758197 containerd[1613]: time="2025-11-03T16:27:58.758159497Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 3 16:27:58.758683 containerd[1613]: time="2025-11-03T16:27:58.758634413Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 3 16:27:58.759028 containerd[1613]: time="2025-11-03T16:27:58.758851192Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 3 16:27:58.759028 containerd[1613]: time="2025-11-03T16:27:58.758964815Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 3 16:27:58.759350 containerd[1613]: time="2025-11-03T16:27:58.759329351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 3 16:27:58.759420 containerd[1613]: time="2025-11-03T16:27:58.759406947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 3 16:27:58.759469 containerd[1613]: time="2025-11-03T16:27:58.759458445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 3 16:27:58.759530 containerd[1613]: time="2025-11-03T16:27:58.759515707Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 3 16:27:58.759728 containerd[1613]: time="2025-11-03T16:27:58.759643521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 3 16:27:58.759728 containerd[1613]: time="2025-11-03T16:27:58.759658982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 3 16:27:58.759728 containerd[1613]: time="2025-11-03T16:27:58.759669129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 3 16:27:58.759728 containerd[1613]: time="2025-11-03T16:27:58.759678656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 3 16:27:58.759728 
containerd[1613]: time="2025-11-03T16:27:58.759688313Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 3 16:27:58.759896 containerd[1613]: time="2025-11-03T16:27:58.759842236Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 3 16:27:58.759896 containerd[1613]: time="2025-11-03T16:27:58.759860168Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 3 16:27:58.759896 containerd[1613]: time="2025-11-03T16:27:58.759868575Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 3 16:27:58.760113 containerd[1613]: time="2025-11-03T16:27:58.760093739Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 3 16:27:58.760311 containerd[1613]: time="2025-11-03T16:27:58.760158906Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 3 16:27:58.760311 containerd[1613]: time="2025-11-03T16:27:58.760174478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 3 16:27:58.760311 containerd[1613]: time="2025-11-03T16:27:58.760193272Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 3 16:27:58.760311 containerd[1613]: time="2025-11-03T16:27:58.760205581Z" level=info msg="runtime interface created" Nov 3 16:27:58.760311 containerd[1613]: time="2025-11-03T16:27:58.760211035Z" level=info msg="created NRI interface" Nov 3 16:27:58.760311 containerd[1613]: time="2025-11-03T16:27:58.760232821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 3 16:27:58.760311 containerd[1613]: time="2025-11-03T16:27:58.760244990Z" level=info msg="Connect containerd service" Nov 3 16:27:58.760311 containerd[1613]: time="2025-11-03T16:27:58.760270838Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 3 16:27:58.761979 containerd[1613]: time="2025-11-03T16:27:58.761575110Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 3 16:27:58.766138 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 3 16:27:58.766447 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 3 16:27:58.771549 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 3 16:27:58.773564 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 3 16:27:58.824309 tar[1593]: linux-amd64/README.md Nov 3 16:27:58.947112 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 3 16:27:59.087550 containerd[1613]: time="2025-11-03T16:27:59.087389444Z" level=info msg="Start subscribing containerd event" Nov 3 16:27:59.087678 containerd[1613]: time="2025-11-03T16:27:59.087508506Z" level=info msg="Start recovering state" Nov 3 16:27:59.087877 containerd[1613]: time="2025-11-03T16:27:59.087810227Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 3 16:27:59.087877 containerd[1613]: time="2025-11-03T16:27:59.087833945Z" level=info msg="Start event monitor" Nov 3 16:27:59.087940 containerd[1613]: time="2025-11-03T16:27:59.087893931Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 3 16:27:59.087961 containerd[1613]: time="2025-11-03T16:27:59.087893120Z" level=info msg="Start cni network conf syncer for default" Nov 3 16:27:59.088551 containerd[1613]: time="2025-11-03T16:27:59.087977855Z" level=info msg="Start streaming server" Nov 3 16:27:59.088551 containerd[1613]: time="2025-11-03T16:27:59.088061649Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 3 16:27:59.088551 containerd[1613]: time="2025-11-03T16:27:59.088088790Z" level=info msg="runtime interface starting up..." Nov 3 16:27:59.088551 containerd[1613]: time="2025-11-03T16:27:59.088113169Z" level=info msg="starting plugins..." Nov 3 16:27:59.088551 containerd[1613]: time="2025-11-03T16:27:59.088165429Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 3 16:27:59.088551 containerd[1613]: time="2025-11-03T16:27:59.088455631Z" level=info msg="containerd successfully booted in 0.381542s" Nov 3 16:27:59.088674 systemd[1]: Started containerd.service - containerd container runtime. Nov 3 16:27:59.351873 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 3 16:27:59.355267 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:58086.service - OpenSSH per-connection server daemon (10.0.0.1:58086). Nov 3 16:27:59.461603 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 58086 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:27:59.463882 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:27:59.471081 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 3 16:27:59.473998 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 3 16:27:59.482223 systemd-logind[1582]: New session 1 of user core. Nov 3 16:27:59.500921 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 3 16:27:59.507250 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 3 16:27:59.524941 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 3 16:27:59.527748 systemd-logind[1582]: New session c1 of user core. Nov 3 16:27:59.725704 systemd[1710]: Queued start job for default target default.target. Nov 3 16:27:59.733357 systemd[1710]: Created slice app.slice - User Application Slice. Nov 3 16:27:59.733386 systemd[1710]: Reached target paths.target - Paths. Nov 3 16:27:59.733430 systemd[1710]: Reached target timers.target - Timers. Nov 3 16:27:59.735057 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 3 16:27:59.749488 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 3 16:27:59.749617 systemd[1710]: Reached target sockets.target - Sockets. Nov 3 16:27:59.749658 systemd[1710]: Reached target basic.target - Basic System. Nov 3 16:27:59.749701 systemd[1710]: Reached target default.target - Main User Target. Nov 3 16:27:59.749738 systemd[1710]: Startup finished in 212ms. Nov 3 16:27:59.750184 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 3 16:27:59.753705 systemd[1]: Started session-1.scope - Session 1 of User core. 
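[Editor's note] The containerd error above, "no network config found in /etc/cni/net.d", clears once a CNI network config exists; on a real cluster the CNI add-on installs its own file there later. A hedged sketch of a minimal bridge conflist, assuming the standard bridge/host-local/portmap plugins are present under /opt/cni/bin (file name, network name, and subnet are illustrative):

    #!/usr/bin/env python3
    # Sketch: write a minimal CNI config so containerd's CRI plugin can
    # initialize pod networking. A cluster's CNI add-on normally installs
    # its own config here instead of this hand-written one.
    import json
    import pathlib

    conf = {
        "cniVersion": "1.0.0",
        "name": "bridge-net",  # illustrative name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.85.0.0/16"}]],  # illustrative subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-bridge.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))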
Nov 3 16:27:59.783725 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:58094.service - OpenSSH per-connection server daemon (10.0.0.1:58094). Nov 3 16:27:59.852133 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 58094 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:27:59.854184 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:27:59.864487 systemd-logind[1582]: New session 2 of user core. Nov 3 16:27:59.918343 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 3 16:27:59.936120 sshd[1724]: Connection closed by 10.0.0.1 port 58094 Nov 3 16:27:59.936501 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Nov 3 16:27:59.945533 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:58094.service: Deactivated successfully. Nov 3 16:27:59.947463 systemd[1]: session-2.scope: Deactivated successfully. Nov 3 16:27:59.948305 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit. Nov 3 16:27:59.951681 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:58096.service - OpenSSH per-connection server daemon (10.0.0.1:58096). Nov 3 16:27:59.955144 systemd-logind[1582]: Removed session 2. Nov 3 16:28:00.011194 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 58096 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:28:00.012880 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:28:00.018489 systemd-logind[1582]: New session 3 of user core. Nov 3 16:28:00.029170 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 3 16:28:00.044849 sshd[1735]: Connection closed by 10.0.0.1 port 58096 Nov 3 16:28:00.045140 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Nov 3 16:28:00.048397 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:58096.service: Deactivated successfully. Nov 3 16:28:00.051420 systemd[1]: session-3.scope: Deactivated successfully. Nov 3 16:28:00.054096 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit. Nov 3 16:28:00.055829 systemd-logind[1582]: Removed session 3. Nov 3 16:28:00.069060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 3 16:28:00.071433 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 3 16:28:00.073371 systemd[1]: Startup finished in 3.233s (kernel) + 7.150s (initrd) + 5.103s (userspace) = 15.487s. Nov 3 16:28:00.088358 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 3 16:28:00.607826 kubelet[1743]: E1103 16:28:00.607764 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 3 16:28:00.611593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 3 16:28:00.611810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 3 16:28:00.612255 systemd[1]: kubelet.service: Consumed 1.654s CPU time, 256.6M memory peak. Nov 3 16:28:10.053260 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:51694.service - OpenSSH per-connection server daemon (10.0.0.1:51694). 
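[Editor's note] The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is expected on a node that has not yet joined a cluster: kubeadm writes that file during init/join, and systemd keeps restarting the unit until it appears. A minimal sketch of the kind of KubeletConfiguration kubeadm generates (values illustrative, not a reconstruction of this node's eventual config):

    #!/usr/bin/env python3
    # Sketch: emit a minimal KubeletConfiguration of the kind kubeadm
    # writes to /var/lib/kubelet/config.yaml. Illustrative values only.
    import pathlib

    KUBELET_CONFIG = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"          # matches the CRI runtime's driver in this log
        "staticPodPath: /etc/kubernetes/manifests\n"
    )

    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(KUBELET_CONFIG)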
Nov 3 16:28:10.124963 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 51694 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:28:10.126439 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:28:10.132365 systemd-logind[1582]: New session 4 of user core. Nov 3 16:28:10.148164 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 3 16:28:10.163966 sshd[1759]: Connection closed by 10.0.0.1 port 51694 Nov 3 16:28:10.164320 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Nov 3 16:28:10.177283 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:51694.service: Deactivated successfully. Nov 3 16:28:10.179172 systemd[1]: session-4.scope: Deactivated successfully. Nov 3 16:28:10.179914 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit. Nov 3 16:28:10.182791 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:51696.service - OpenSSH per-connection server daemon (10.0.0.1:51696). Nov 3 16:28:10.183683 systemd-logind[1582]: Removed session 4. Nov 3 16:28:10.241099 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 51696 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:28:10.242599 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:28:10.247566 systemd-logind[1582]: New session 5 of user core. Nov 3 16:28:10.261168 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 3 16:28:10.271051 sshd[1768]: Connection closed by 10.0.0.1 port 51696 Nov 3 16:28:10.271409 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Nov 3 16:28:10.280619 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:51696.service: Deactivated successfully. Nov 3 16:28:10.282603 systemd[1]: session-5.scope: Deactivated successfully. Nov 3 16:28:10.283375 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit. Nov 3 16:28:10.286079 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:51702.service - OpenSSH per-connection server daemon (10.0.0.1:51702). Nov 3 16:28:10.286587 systemd-logind[1582]: Removed session 5. Nov 3 16:28:10.352743 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 51702 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:28:10.354575 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:28:10.360141 systemd-logind[1582]: New session 6 of user core. Nov 3 16:28:10.374310 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 3 16:28:10.391544 sshd[1778]: Connection closed by 10.0.0.1 port 51702 Nov 3 16:28:10.391970 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Nov 3 16:28:10.405500 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:51702.service: Deactivated successfully. Nov 3 16:28:10.407283 systemd[1]: session-6.scope: Deactivated successfully. Nov 3 16:28:10.408273 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit. Nov 3 16:28:10.410864 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:51704.service - OpenSSH per-connection server daemon (10.0.0.1:51704). Nov 3 16:28:10.411501 systemd-logind[1582]: Removed session 6. 
Nov 3 16:28:10.474720 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 51704 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:28:10.476471 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:28:10.481136 systemd-logind[1582]: New session 7 of user core. Nov 3 16:28:10.492172 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 3 16:28:10.515124 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 3 16:28:10.515564 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 3 16:28:10.533592 sudo[1788]: pam_unix(sudo:session): session closed for user root Nov 3 16:28:10.535676 sshd[1787]: Connection closed by 10.0.0.1 port 51704 Nov 3 16:28:10.536186 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Nov 3 16:28:10.555858 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:51704.service: Deactivated successfully. Nov 3 16:28:10.557820 systemd[1]: session-7.scope: Deactivated successfully. Nov 3 16:28:10.558690 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. Nov 3 16:28:10.561503 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:51714.service - OpenSSH per-connection server daemon (10.0.0.1:51714). Nov 3 16:28:10.562373 systemd-logind[1582]: Removed session 7. Nov 3 16:28:10.622440 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 51714 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:28:10.624161 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:28:10.625399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 3 16:28:10.627027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 3 16:28:10.631666 systemd-logind[1582]: New session 8 of user core. Nov 3 16:28:10.641224 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 3 16:28:10.658144 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 3 16:28:10.658469 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 3 16:28:10.675058 sudo[1802]: pam_unix(sudo:session): session closed for user root Nov 3 16:28:10.683218 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 3 16:28:10.683544 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 3 16:28:10.695809 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 3 16:28:10.758104 augenrules[1824]: No rules Nov 3 16:28:10.759980 systemd[1]: audit-rules.service: Deactivated successfully. Nov 3 16:28:10.760369 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 3 16:28:10.761953 sudo[1801]: pam_unix(sudo:session): session closed for user root Nov 3 16:28:10.763955 sshd[1800]: Connection closed by 10.0.0.1 port 51714 Nov 3 16:28:10.764326 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Nov 3 16:28:10.779683 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:51714.service: Deactivated successfully. Nov 3 16:28:10.781621 systemd[1]: session-8.scope: Deactivated successfully. Nov 3 16:28:10.782426 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. 
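[Editor's note] augenrules reported "No rules" above because the two default rule files were removed via sudo just before; augenrules merges every *.rules file under /etc/audit/rules.d into the loaded ruleset. A hedged sketch of installing a single watch rule and reloading (the rule and file name are illustrative, not the files removed above):

    #!/usr/bin/env python3
    # Sketch: drop one auditd watch rule into rules.d and reload.
    # augenrules concatenates all *.rules files in this directory.
    import pathlib
    import subprocess

    rule = "-w /etc/ssh/sshd_config -p wa -k sshd_config\n"  # illustrative rule
    pathlib.Path("/etc/audit/rules.d/10-sshd.rules").write_text(rule)
    subprocess.run(["augenrules", "--load"], check=True)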
Nov 3 16:28:10.785268 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:51728.service - OpenSSH per-connection server daemon (10.0.0.1:51728). Nov 3 16:28:10.786160 systemd-logind[1582]: Removed session 8. Nov 3 16:28:10.846661 sshd[1833]: Accepted publickey for core from 10.0.0.1 port 51728 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:28:10.848190 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:28:10.852901 systemd-logind[1582]: New session 9 of user core. Nov 3 16:28:10.863180 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 3 16:28:10.880361 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 3 16:28:10.880788 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 3 16:28:10.929629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 3 16:28:10.939322 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 3 16:28:11.002722 kubelet[1847]: E1103 16:28:11.002626 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 3 16:28:11.008967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 3 16:28:11.009252 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 3 16:28:11.009755 systemd[1]: kubelet.service: Consumed 322ms CPU time, 109.8M memory peak. Nov 3 16:28:11.544590 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 3 16:28:11.561327 (dockerd)[1871]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 3 16:28:12.169083 dockerd[1871]: time="2025-11-03T16:28:12.168956884Z" level=info msg="Starting up" Nov 3 16:28:12.170181 dockerd[1871]: time="2025-11-03T16:28:12.170156092Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 3 16:28:12.194589 dockerd[1871]: time="2025-11-03T16:28:12.194525709Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 3 16:28:12.646433 systemd[1]: var-lib-docker-metacopy\x2dcheck25394978-merged.mount: Deactivated successfully. Nov 3 16:28:12.682755 dockerd[1871]: time="2025-11-03T16:28:12.682669215Z" level=info msg="Loading containers: start." Nov 3 16:28:12.696049 kernel: Initializing XFRM netlink socket Nov 3 16:28:13.043201 systemd-networkd[1500]: docker0: Link UP Nov 3 16:28:13.143218 dockerd[1871]: time="2025-11-03T16:28:13.143138485Z" level=info msg="Loading containers: done." Nov 3 16:28:13.159887 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck291739558-merged.mount: Deactivated successfully. 
Nov 3 16:28:13.162903 dockerd[1871]: time="2025-11-03T16:28:13.162863045Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 3 16:28:13.163032 dockerd[1871]: time="2025-11-03T16:28:13.162974804Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 3 16:28:13.163127 dockerd[1871]: time="2025-11-03T16:28:13.163107139Z" level=info msg="Initializing buildkit" Nov 3 16:28:13.210738 dockerd[1871]: time="2025-11-03T16:28:13.210631759Z" level=info msg="Completed buildkit initialization" Nov 3 16:28:13.223066 dockerd[1871]: time="2025-11-03T16:28:13.222956726Z" level=info msg="Daemon has completed initialization" Nov 3 16:28:13.223231 dockerd[1871]: time="2025-11-03T16:28:13.223099832Z" level=info msg="API listen on /run/docker.sock" Nov 3 16:28:13.223390 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 3 16:28:14.019662 containerd[1613]: time="2025-11-03T16:28:14.019593147Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 3 16:28:15.019951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117729898.mount: Deactivated successfully. Nov 3 16:28:16.524170 containerd[1613]: time="2025-11-03T16:28:16.524090254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:16.525635 containerd[1613]: time="2025-11-03T16:28:16.525585340Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=25940818" Nov 3 16:28:16.527625 containerd[1613]: time="2025-11-03T16:28:16.527574975Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:16.530689 containerd[1613]: time="2025-11-03T16:28:16.530638169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:16.531564 containerd[1613]: time="2025-11-03T16:28:16.531521724Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.511864631s" Nov 3 16:28:16.531613 containerd[1613]: time="2025-11-03T16:28:16.531569603Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 3 16:28:16.532580 containerd[1613]: time="2025-11-03T16:28:16.532537353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 3 16:28:18.049714 containerd[1613]: time="2025-11-03T16:28:18.049621367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:18.051200 containerd[1613]: time="2025-11-03T16:28:18.051121300Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active 
requests=0, bytes read=21151604" Nov 3 16:28:18.053748 containerd[1613]: time="2025-11-03T16:28:18.053708782Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:18.056449 containerd[1613]: time="2025-11-03T16:28:18.056361937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:18.057701 containerd[1613]: time="2025-11-03T16:28:18.057662567Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.52508907s" Nov 3 16:28:18.057701 containerd[1613]: time="2025-11-03T16:28:18.057701236Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 3 16:28:18.058865 containerd[1613]: time="2025-11-03T16:28:18.058781186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 3 16:28:19.903651 containerd[1613]: time="2025-11-03T16:28:19.903579244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:19.904792 containerd[1613]: time="2025-11-03T16:28:19.904755408Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15716956" Nov 3 16:28:19.908032 containerd[1613]: time="2025-11-03T16:28:19.907986388Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:19.911493 containerd[1613]: time="2025-11-03T16:28:19.911444300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:19.912492 containerd[1613]: time="2025-11-03T16:28:19.912455351Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.853594514s" Nov 3 16:28:19.912530 containerd[1613]: time="2025-11-03T16:28:19.912492470Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 3 16:28:19.913071 containerd[1613]: time="2025-11-03T16:28:19.913047374Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 3 16:28:21.025999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925849399.mount: Deactivated successfully. Nov 3 16:28:21.027157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 3 16:28:21.028580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 3 16:28:21.228410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 3 16:28:21.232737 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 3 16:28:21.285126 kubelet[2169]: E1103 16:28:21.284966 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 3 16:28:21.288907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 3 16:28:21.289136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 3 16:28:21.289532 systemd[1]: kubelet.service: Consumed 213ms CPU time, 110.9M memory peak. Nov 3 16:28:21.763796 containerd[1613]: time="2025-11-03T16:28:21.763724195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:21.764639 containerd[1613]: time="2025-11-03T16:28:21.764567492Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=14376276" Nov 3 16:28:21.765738 containerd[1613]: time="2025-11-03T16:28:21.765692952Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:21.767622 containerd[1613]: time="2025-11-03T16:28:21.767579669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:21.768421 containerd[1613]: time="2025-11-03T16:28:21.768373279Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.855294585s" Nov 3 16:28:21.768421 containerd[1613]: time="2025-11-03T16:28:21.768413232Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 3 16:28:21.769042 containerd[1613]: time="2025-11-03T16:28:21.768980663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 3 16:28:22.490743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297362304.mount: Deactivated successfully. 
Nov 3 16:28:23.892852 containerd[1613]: time="2025-11-03T16:28:23.892766039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:23.893765 containerd[1613]: time="2025-11-03T16:28:23.893707898Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21824533" Nov 3 16:28:23.895227 containerd[1613]: time="2025-11-03T16:28:23.895189249Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:23.897912 containerd[1613]: time="2025-11-03T16:28:23.897863188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:23.898786 containerd[1613]: time="2025-11-03T16:28:23.898716186Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.129673081s" Nov 3 16:28:23.898786 containerd[1613]: time="2025-11-03T16:28:23.898771431Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 3 16:28:23.899403 containerd[1613]: time="2025-11-03T16:28:23.899360801Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 3 16:28:24.513304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396413242.mount: Deactivated successfully. 
Nov 3 16:28:24.519422 containerd[1613]: time="2025-11-03T16:28:24.519346450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:24.520206 containerd[1613]: time="2025-11-03T16:28:24.520138468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Nov 3 16:28:24.521332 containerd[1613]: time="2025-11-03T16:28:24.521274518Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:24.523150 containerd[1613]: time="2025-11-03T16:28:24.523101758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:24.523731 containerd[1613]: time="2025-11-03T16:28:24.523689966Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 624.299395ms" Nov 3 16:28:24.523731 containerd[1613]: time="2025-11-03T16:28:24.523723161Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 3 16:28:24.524234 containerd[1613]: time="2025-11-03T16:28:24.524206807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 3 16:28:28.753278 containerd[1613]: time="2025-11-03T16:28:28.753190533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:28.754062 containerd[1613]: time="2025-11-03T16:28:28.754019982Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61189017" Nov 3 16:28:28.755267 containerd[1613]: time="2025-11-03T16:28:28.755236486Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:28.757829 containerd[1613]: time="2025-11-03T16:28:28.757782907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:28.758650 containerd[1613]: time="2025-11-03T16:28:28.758612907Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.234376049s" Nov 3 16:28:28.758722 containerd[1613]: time="2025-11-03T16:28:28.758654436Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 3 16:28:31.348734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 3 16:28:31.350590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
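[Editor's note] The PullImage sequence above (kube-apiserver through etcd) stores images in containerd's "k8s.io" namespace, the one registered with NRI earlier in the log. A hedged sketch for listing what landed there, assuming the ctr CLI that ships with containerd is on PATH and /run/containerd/containerd.sock is accessible:

    #!/usr/bin/env python3
    # Sketch: list the image references that the PullImage operations
    # above stored in containerd's "k8s.io" namespace, via the ctr CLI.
    import subprocess

    out = subprocess.run(
        ["ctr", "--namespace", "k8s.io", "images", "ls", "-q"],
        check=True, capture_output=True, text=True,
    ).stdout
    for ref in out.splitlines():
        print(ref)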
Nov 3 16:28:31.602770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 3 16:28:31.620279 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 3 16:28:31.696655 kubelet[2317]: E1103 16:28:31.696570 2317 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 3 16:28:31.701443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 3 16:28:31.701668 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 3 16:28:31.702102 systemd[1]: kubelet.service: Consumed 286ms CPU time, 110.6M memory peak. Nov 3 16:28:32.004161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 3 16:28:32.004344 systemd[1]: kubelet.service: Consumed 286ms CPU time, 110.6M memory peak. Nov 3 16:28:32.006859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 3 16:28:32.035642 systemd[1]: Reload requested from client PID 2333 ('systemctl') (unit session-9.scope)... Nov 3 16:28:32.035660 systemd[1]: Reloading... Nov 3 16:28:32.120063 zram_generator::config[2377]: No configuration found. Nov 3 16:28:32.897023 systemd[1]: Reloading finished in 860 ms. Nov 3 16:28:32.968954 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 3 16:28:32.969136 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 3 16:28:32.969517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 3 16:28:32.969609 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.1M memory peak. Nov 3 16:28:32.971560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 3 16:28:33.191221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 3 16:28:33.205346 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 3 16:28:33.254884 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 3 16:28:33.254884 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 3 16:28:33.255312 kubelet[2425]: I1103 16:28:33.254944 2425 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 3 16:28:33.717737 kubelet[2425]: I1103 16:28:33.717684 2425 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 3 16:28:33.717737 kubelet[2425]: I1103 16:28:33.717717 2425 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 3 16:28:33.717737 kubelet[2425]: I1103 16:28:33.717754 2425 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 3 16:28:33.717914 kubelet[2425]: I1103 16:28:33.717763 2425 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 3 16:28:33.718064 kubelet[2425]: I1103 16:28:33.718048 2425 server.go:956] "Client rotation is on, will bootstrap in background" Nov 3 16:28:34.704913 kubelet[2425]: E1103 16:28:34.704835 2425 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 3 16:28:34.705462 kubelet[2425]: I1103 16:28:34.704981 2425 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 3 16:28:34.709202 kubelet[2425]: I1103 16:28:34.709158 2425 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 3 16:28:34.714549 kubelet[2425]: I1103 16:28:34.714516 2425 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 3 16:28:34.715929 kubelet[2425]: I1103 16:28:34.715890 2425 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 3 16:28:34.716129 kubelet[2425]: I1103 16:28:34.715923 2425 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 3 16:28:34.716255 kubelet[2425]: I1103 16:28:34.716142 2425 topology_manager.go:138] "Creating topology manager with none policy" Nov 3 16:28:34.716255 kubelet[2425]: I1103 16:28:34.716152 2425 container_manager_linux.go:306] "Creating device plugin manager" Nov 3 16:28:34.716317 kubelet[2425]: I1103 16:28:34.716294 2425 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 3 16:28:34.718453 kubelet[2425]: I1103 16:28:34.718426 2425 state_mem.go:36] "Initialized new in-memory state store" Nov 3 16:28:34.718668 kubelet[2425]: I1103 16:28:34.718646 2425 kubelet.go:475] "Attempting to sync node with API server" Nov 3 16:28:34.718705 
kubelet[2425]: I1103 16:28:34.718672 2425 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 3 16:28:34.718730 kubelet[2425]: I1103 16:28:34.718712 2425 kubelet.go:387] "Adding apiserver pod source" Nov 3 16:28:34.718753 kubelet[2425]: I1103 16:28:34.718734 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 3 16:28:34.720352 kubelet[2425]: E1103 16:28:34.720265 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 3 16:28:34.720352 kubelet[2425]: E1103 16:28:34.720312 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 3 16:28:34.721316 kubelet[2425]: I1103 16:28:34.721294 2425 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 3 16:28:34.722232 kubelet[2425]: I1103 16:28:34.722205 2425 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 3 16:28:34.722232 kubelet[2425]: I1103 16:28:34.722234 2425 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 3 16:28:34.722318 kubelet[2425]: W1103 16:28:34.722303 2425 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
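[Editor's note] The repeated "dial tcp 10.0.0.124:6443: connect: connection refused" reflector errors here are expected at this stage of a kubeadm-style bootstrap: the kubelet comes up before any API server is listening, since the static-pod API server only starts after the kubelet launches it. A minimal sketch for watching the endpoint come up, with the address taken from the log (a plain TCP probe, not a TLS or readiness check):

    #!/usr/bin/env python3
    # Sketch: poll the API server endpoint the kubelet is retrying above.
    # TCP-level only; does not validate certificates or API health.
    import socket
    import time

    HOST, PORT = "10.0.0.124", 6443  # from the log

    def wait_for_port(host: str, port: int, timeout: float = 120.0) -> bool:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True
            except OSError:
                time.sleep(1)
        return False

    if __name__ == "__main__":
        print("api server reachable:", wait_for_port(HOST, PORT))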
Nov 3 16:28:34.728478 kubelet[2425]: I1103 16:28:34.728430 2425 server.go:1262] "Started kubelet" Nov 3 16:28:34.728773 kubelet[2425]: I1103 16:28:34.728725 2425 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 3 16:28:34.730923 kubelet[2425]: I1103 16:28:34.730895 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 3 16:28:34.733904 kubelet[2425]: I1103 16:28:34.731537 2425 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 3 16:28:34.733904 kubelet[2425]: I1103 16:28:34.732763 2425 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 3 16:28:34.733904 kubelet[2425]: I1103 16:28:34.732821 2425 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 3 16:28:34.733904 kubelet[2425]: I1103 16:28:34.733412 2425 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 3 16:28:34.733904 kubelet[2425]: I1103 16:28:34.733563 2425 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 3 16:28:34.733904 kubelet[2425]: E1103 16:28:34.733876 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 3 16:28:34.735284 kubelet[2425]: I1103 16:28:34.735245 2425 server.go:310] "Adding debug handlers to kubelet server" Nov 3 16:28:34.735396 kubelet[2425]: I1103 16:28:34.735373 2425 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 3 16:28:34.735396 kubelet[2425]: E1103 16:28:34.735376 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Nov 3 16:28:34.735459 kubelet[2425]: I1103 16:28:34.735423 2425 reconciler.go:29] "Reconciler: start to sync state" Nov 3 16:28:34.735575 kubelet[2425]: I1103 16:28:34.735556 2425 factory.go:223] Registration of the systemd container factory successfully Nov 3 16:28:34.735658 kubelet[2425]: I1103 16:28:34.735640 2425 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 3 16:28:34.735740 kubelet[2425]: E1103 16:28:34.735715 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 3 16:28:34.736756 kubelet[2425]: I1103 16:28:34.736737 2425 factory.go:223] Registration of the containerd container factory successfully Nov 3 16:28:34.737368 kubelet[2425]: E1103 16:28:34.736319 2425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18748c06ba15d86d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-03 16:28:34.728392813 +0000 UTC m=+1.515211851,LastTimestamp:2025-11-03 16:28:34.728392813 +0000 UTC m=+1.515211851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 3 16:28:34.737510 kubelet[2425]: E1103 16:28:34.737486 2425 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 3 16:28:34.751645 kubelet[2425]: I1103 16:28:34.751615 2425 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 3 16:28:34.751645 kubelet[2425]: I1103 16:28:34.751634 2425 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 3 16:28:34.751753 kubelet[2425]: I1103 16:28:34.751657 2425 state_mem.go:36] "Initialized new in-memory state store" Nov 3 16:28:34.753889 kubelet[2425]: I1103 16:28:34.753796 2425 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 3 16:28:34.754282 kubelet[2425]: I1103 16:28:34.754258 2425 policy_none.go:49] "None policy: Start" Nov 3 16:28:34.754282 kubelet[2425]: I1103 16:28:34.754281 2425 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 3 16:28:34.754330 kubelet[2425]: I1103 16:28:34.754296 2425 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 3 16:28:34.756833 kubelet[2425]: I1103 16:28:34.756810 2425 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 3 16:28:34.757127 kubelet[2425]: I1103 16:28:34.756853 2425 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 3 16:28:34.757127 kubelet[2425]: I1103 16:28:34.756879 2425 kubelet.go:2427] "Starting kubelet main sync loop" Nov 3 16:28:34.757127 kubelet[2425]: E1103 16:28:34.756918 2425 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 3 16:28:34.757375 kubelet[2425]: E1103 16:28:34.757344 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 3 16:28:34.757667 kubelet[2425]: I1103 16:28:34.757447 2425 policy_none.go:47] "Start" Nov 3 16:28:34.762657 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 3 16:28:34.777047 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 3 16:28:34.780464 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
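The three slices systemd just created mirror the kubelet's QoS hierarchy: kubepods.slice is the parent, with burstable and best-effort children, while guaranteed pods land directly under the parent. An illustrative helper capturing that naming convention (not kubelet code, just the pattern visible in these lines):

package main

import "fmt"

// sliceForQOS reproduces the naming convention visible in the log:
// a parent kubepods.slice with per-QoS children for burstable and
// best-effort pods.
func sliceForQOS(qos string) string {
	switch qos {
	case "Burstable":
		return "kubepods-burstable.slice"
	case "BestEffort":
		return "kubepods-besteffort.slice"
	default: // Guaranteed pods sit directly under the parent slice.
		return "kubepods.slice"
	}
}

func main() {
	for _, q := range []string{"Guaranteed", "Burstable", "BestEffort"} {
		fmt.Println(q, "->", sliceForQOS(q))
	}
}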
Nov 3 16:28:34.799891 kubelet[2425]: E1103 16:28:34.799866 2425 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 3 16:28:34.800135 kubelet[2425]: I1103 16:28:34.800111 2425 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 3 16:28:34.800189 kubelet[2425]: I1103 16:28:34.800134 2425 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 3 16:28:34.800443 kubelet[2425]: I1103 16:28:34.800386 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 3 16:28:34.801274 kubelet[2425]: E1103 16:28:34.801237 2425 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 3 16:28:34.801424 kubelet[2425]: E1103 16:28:34.801291 2425 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 3 16:28:34.867928 systemd[1]: Created slice kubepods-burstable-pod005fb0e00d8bede726b92542eba27ebd.slice - libcontainer container kubepods-burstable-pod005fb0e00d8bede726b92542eba27ebd.slice. Nov 3 16:28:34.878932 kubelet[2425]: E1103 16:28:34.878866 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 3 16:28:34.882233 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 3 16:28:34.884361 kubelet[2425]: E1103 16:28:34.884328 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 3 16:28:34.886293 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
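The eviction manager that just started enforces the hard thresholds from the NodeConfig dump earlier: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%. A minimal sketch of how a "LessThan" signal of that shape can be checked, using hypothetical types rather than the kubelet's own implementation:

package main

import "fmt"

// Threshold mirrors the shape logged in the NodeConfig dump: a signal
// compared against either an absolute quantity or a percentage.
type Threshold struct {
	Signal     string
	Quantity   int64   // absolute bytes, 0 if unset
	Percentage float64 // fraction of capacity, 0 if unset
}

// exceeded reports whether available capacity has fallen below the
// threshold, resolving a percentage against total capacity first.
func exceeded(t Threshold, available, capacity int64) bool {
	limit := t.Quantity
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 << 20} // 100Mi
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.1}     // 10%
	fmt.Println(exceeded(memory, 50<<20, 8<<30)) // true: under 100Mi
	fmt.Println(exceeded(nodefs, 20<<30, 100<<30)) // false: 20% free
}

Percentages resolve against total capacity, which is why the memory signal carries a quantity while the filesystem signals carry fractions.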
Nov 3 16:28:34.888285 kubelet[2425]: E1103 16:28:34.888260 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 3 16:28:34.905561 kubelet[2425]: I1103 16:28:34.905511 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 3 16:28:34.905997 kubelet[2425]: E1103 16:28:34.905947 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 3 16:28:34.936624 kubelet[2425]: E1103 16:28:34.936590 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Nov 3 16:28:35.036946 kubelet[2425]: I1103 16:28:35.036834 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/005fb0e00d8bede726b92542eba27ebd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"005fb0e00d8bede726b92542eba27ebd\") " pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:35.036946 kubelet[2425]: I1103 16:28:35.036868 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/005fb0e00d8bede726b92542eba27ebd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"005fb0e00d8bede726b92542eba27ebd\") " pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:35.036946 kubelet[2425]: I1103 16:28:35.036885 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:35.036946 kubelet[2425]: I1103 16:28:35.036899 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:35.036946 kubelet[2425]: I1103 16:28:35.036916 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:35.037130 kubelet[2425]: I1103 16:28:35.036976 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/005fb0e00d8bede726b92542eba27ebd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"005fb0e00d8bede726b92542eba27ebd\") " pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:35.037130 kubelet[2425]: I1103 16:28:35.036993 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:35.037186 kubelet[2425]: I1103 16:28:35.037128 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:35.037217 kubelet[2425]: I1103 16:28:35.037195 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 3 16:28:35.107798 kubelet[2425]: I1103 16:28:35.107760 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 3 16:28:35.108048 kubelet[2425]: E1103 16:28:35.107979 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 3 16:28:35.182434 kubelet[2425]: E1103 16:28:35.182375 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:35.183164 containerd[1613]: time="2025-11-03T16:28:35.183114222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:005fb0e00d8bede726b92542eba27ebd,Namespace:kube-system,Attempt:0,}" Nov 3 16:28:35.187303 kubelet[2425]: E1103 16:28:35.187276 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:35.187612 containerd[1613]: time="2025-11-03T16:28:35.187542960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 3 16:28:35.191166 kubelet[2425]: E1103 16:28:35.191131 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:35.191465 containerd[1613]: time="2025-11-03T16:28:35.191404525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 3 16:28:35.338266 kubelet[2425]: E1103 16:28:35.338132 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Nov 3 16:28:35.510329 kubelet[2425]: I1103 16:28:35.510281 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 3 16:28:35.510692 kubelet[2425]: E1103 16:28:35.510651 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 3 16:28:35.553693 
kubelet[2425]: E1103 16:28:35.553636 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 3 16:28:35.605985 kubelet[2425]: E1103 16:28:35.605941 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 3 16:28:35.828477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344974113.mount: Deactivated successfully. Nov 3 16:28:35.835185 containerd[1613]: time="2025-11-03T16:28:35.835127962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 3 16:28:35.836947 containerd[1613]: time="2025-11-03T16:28:35.836912468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=881" Nov 3 16:28:35.838993 containerd[1613]: time="2025-11-03T16:28:35.838960694Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 3 16:28:35.840113 kubelet[2425]: E1103 16:28:35.840077 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 3 16:28:35.840933 containerd[1613]: time="2025-11-03T16:28:35.840905984Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 3 16:28:35.841799 containerd[1613]: time="2025-11-03T16:28:35.841752042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 3 16:28:35.843619 containerd[1613]: time="2025-11-03T16:28:35.843593793Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 3 16:28:35.844597 containerd[1613]: time="2025-11-03T16:28:35.844546493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 3 16:28:35.845415 containerd[1613]: time="2025-11-03T16:28:35.845386592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 3 16:28:35.846022 containerd[1613]: time="2025-11-03T16:28:35.845982188Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 659.016406ms" Nov 3 16:28:35.847212 containerd[1613]: time="2025-11-03T16:28:35.847191158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 655.79216ms" Nov 3 16:28:35.856250 containerd[1613]: time="2025-11-03T16:28:35.856114174Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 661.729324ms" Nov 3 16:28:35.883056 containerd[1613]: time="2025-11-03T16:28:35.882319936Z" level=info msg="connecting to shim 5a971e4d576b83b88347c62545fce12fc4f99f6bffb2dcbb582fee9a97ce0e0d" address="unix:///run/containerd/s/0d9f43a3c29c0d567bb0262b3f23c30c9da1d7bbcb8607f1dea5c988fffa5b0c" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:28:35.883056 containerd[1613]: time="2025-11-03T16:28:35.883055927Z" level=info msg="connecting to shim 3c59f47e918a288ad9f8db890f3e83f2f3871e1302dbc246ff31e4684a1f31d5" address="unix:///run/containerd/s/9f70390f489b28aa905eb4777c401beb6c8a6ca78725b1576ac4402d9d58a214" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:28:35.895284 containerd[1613]: time="2025-11-03T16:28:35.895218312Z" level=info msg="connecting to shim 014ce8bcb8ecb82e062452920787c9629988332447374b06a05c45dcbec78fa6" address="unix:///run/containerd/s/e0abbdc6c2d99d1a62081095a6b911904bbaae895a74b222932d1b892418546f" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:28:35.934150 systemd[1]: Started cri-containerd-014ce8bcb8ecb82e062452920787c9629988332447374b06a05c45dcbec78fa6.scope - libcontainer container 014ce8bcb8ecb82e062452920787c9629988332447374b06a05c45dcbec78fa6. Nov 3 16:28:35.938432 systemd[1]: Started cri-containerd-5a971e4d576b83b88347c62545fce12fc4f99f6bffb2dcbb582fee9a97ce0e0d.scope - libcontainer container 5a971e4d576b83b88347c62545fce12fc4f99f6bffb2dcbb582fee9a97ce0e0d. Nov 3 16:28:35.943652 systemd[1]: Started cri-containerd-3c59f47e918a288ad9f8db890f3e83f2f3871e1302dbc246ff31e4684a1f31d5.scope - libcontainer container 3c59f47e918a288ad9f8db890f3e83f2f3871e1302dbc246ff31e4684a1f31d5. 
Nov 3 16:28:35.997090 containerd[1613]: time="2025-11-03T16:28:35.996805781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a971e4d576b83b88347c62545fce12fc4f99f6bffb2dcbb582fee9a97ce0e0d\"" Nov 3 16:28:35.997896 containerd[1613]: time="2025-11-03T16:28:35.997841447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"014ce8bcb8ecb82e062452920787c9629988332447374b06a05c45dcbec78fa6\"" Nov 3 16:28:35.999512 containerd[1613]: time="2025-11-03T16:28:35.999181225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:005fb0e00d8bede726b92542eba27ebd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c59f47e918a288ad9f8db890f3e83f2f3871e1302dbc246ff31e4684a1f31d5\"" Nov 3 16:28:35.999563 kubelet[2425]: E1103 16:28:35.999211 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:35.999563 kubelet[2425]: E1103 16:28:35.999306 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:35.999883 kubelet[2425]: E1103 16:28:35.999861 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:36.005068 containerd[1613]: time="2025-11-03T16:28:36.005042138Z" level=info msg="CreateContainer within sandbox \"5a971e4d576b83b88347c62545fce12fc4f99f6bffb2dcbb582fee9a97ce0e0d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 3 16:28:36.007436 containerd[1613]: time="2025-11-03T16:28:36.007389585Z" level=info msg="CreateContainer within sandbox \"3c59f47e918a288ad9f8db890f3e83f2f3871e1302dbc246ff31e4684a1f31d5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 3 16:28:36.009655 containerd[1613]: time="2025-11-03T16:28:36.009612832Z" level=info msg="CreateContainer within sandbox \"014ce8bcb8ecb82e062452920787c9629988332447374b06a05c45dcbec78fa6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 3 16:28:36.019667 containerd[1613]: time="2025-11-03T16:28:36.019613850Z" level=info msg="Container 051318e99b89f0dda80002ba97ceef77559493b5d217a15ae87b746eceb35b9c: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:28:36.024700 containerd[1613]: time="2025-11-03T16:28:36.024672573Z" level=info msg="Container a50aaa56d67d02d14103de38be0ba40d3fc6d42556c0059765af7d063ce95484: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:28:36.030591 containerd[1613]: time="2025-11-03T16:28:36.030559215Z" level=info msg="CreateContainer within sandbox \"5a971e4d576b83b88347c62545fce12fc4f99f6bffb2dcbb582fee9a97ce0e0d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"051318e99b89f0dda80002ba97ceef77559493b5d217a15ae87b746eceb35b9c\"" Nov 3 16:28:36.031016 containerd[1613]: time="2025-11-03T16:28:36.030969253Z" level=info msg="StartContainer for \"051318e99b89f0dda80002ba97ceef77559493b5d217a15ae87b746eceb35b9c\"" Nov 3 16:28:36.032166 containerd[1613]: time="2025-11-03T16:28:36.032139887Z" level=info msg="connecting to shim 
051318e99b89f0dda80002ba97ceef77559493b5d217a15ae87b746eceb35b9c" address="unix:///run/containerd/s/0d9f43a3c29c0d567bb0262b3f23c30c9da1d7bbcb8607f1dea5c988fffa5b0c" protocol=ttrpc version=3 Nov 3 16:28:36.032501 containerd[1613]: time="2025-11-03T16:28:36.032469422Z" level=info msg="Container 8a2efacd89a8b9e4518f279c53894612df61846531e7cc3d166ae0c55e0f539f: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:28:36.036630 containerd[1613]: time="2025-11-03T16:28:36.036595826Z" level=info msg="CreateContainer within sandbox \"3c59f47e918a288ad9f8db890f3e83f2f3871e1302dbc246ff31e4684a1f31d5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a50aaa56d67d02d14103de38be0ba40d3fc6d42556c0059765af7d063ce95484\"" Nov 3 16:28:36.037379 containerd[1613]: time="2025-11-03T16:28:36.037325345Z" level=info msg="StartContainer for \"a50aaa56d67d02d14103de38be0ba40d3fc6d42556c0059765af7d063ce95484\"" Nov 3 16:28:36.038438 containerd[1613]: time="2025-11-03T16:28:36.038409256Z" level=info msg="connecting to shim a50aaa56d67d02d14103de38be0ba40d3fc6d42556c0059765af7d063ce95484" address="unix:///run/containerd/s/9f70390f489b28aa905eb4777c401beb6c8a6ca78725b1576ac4402d9d58a214" protocol=ttrpc version=3 Nov 3 16:28:36.044751 containerd[1613]: time="2025-11-03T16:28:36.044720438Z" level=info msg="CreateContainer within sandbox \"014ce8bcb8ecb82e062452920787c9629988332447374b06a05c45dcbec78fa6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8a2efacd89a8b9e4518f279c53894612df61846531e7cc3d166ae0c55e0f539f\"" Nov 3 16:28:36.045671 containerd[1613]: time="2025-11-03T16:28:36.045650633Z" level=info msg="StartContainer for \"8a2efacd89a8b9e4518f279c53894612df61846531e7cc3d166ae0c55e0f539f\"" Nov 3 16:28:36.047238 containerd[1613]: time="2025-11-03T16:28:36.047122219Z" level=info msg="connecting to shim 8a2efacd89a8b9e4518f279c53894612df61846531e7cc3d166ae0c55e0f539f" address="unix:///run/containerd/s/e0abbdc6c2d99d1a62081095a6b911904bbaae895a74b222932d1b892418546f" protocol=ttrpc version=3 Nov 3 16:28:36.050174 systemd[1]: Started cri-containerd-051318e99b89f0dda80002ba97ceef77559493b5d217a15ae87b746eceb35b9c.scope - libcontainer container 051318e99b89f0dda80002ba97ceef77559493b5d217a15ae87b746eceb35b9c. Nov 3 16:28:36.068138 systemd[1]: Started cri-containerd-a50aaa56d67d02d14103de38be0ba40d3fc6d42556c0059765af7d063ce95484.scope - libcontainer container a50aaa56d67d02d14103de38be0ba40d3fc6d42556c0059765af7d063ce95484. Nov 3 16:28:36.071519 systemd[1]: Started cri-containerd-8a2efacd89a8b9e4518f279c53894612df61846531e7cc3d166ae0c55e0f539f.scope - libcontainer container 8a2efacd89a8b9e4518f279c53894612df61846531e7cc3d166ae0c55e0f539f. 
Nov 3 16:28:36.082259 kubelet[2425]: E1103 16:28:36.082221 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 3 16:28:36.129973 containerd[1613]: time="2025-11-03T16:28:36.129829464Z" level=info msg="StartContainer for \"051318e99b89f0dda80002ba97ceef77559493b5d217a15ae87b746eceb35b9c\" returns successfully" Nov 3 16:28:36.139534 kubelet[2425]: E1103 16:28:36.139493 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="1.6s" Nov 3 16:28:36.141634 containerd[1613]: time="2025-11-03T16:28:36.141596558Z" level=info msg="StartContainer for \"8a2efacd89a8b9e4518f279c53894612df61846531e7cc3d166ae0c55e0f539f\" returns successfully" Nov 3 16:28:36.142647 containerd[1613]: time="2025-11-03T16:28:36.142617313Z" level=info msg="StartContainer for \"a50aaa56d67d02d14103de38be0ba40d3fc6d42556c0059765af7d063ce95484\" returns successfully" Nov 3 16:28:36.312217 kubelet[2425]: I1103 16:28:36.312170 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 3 16:28:36.941843 kubelet[2425]: E1103 16:28:36.786048 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 3 16:28:36.941843 kubelet[2425]: E1103 16:28:36.787237 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 3 16:28:36.941843 kubelet[2425]: E1103 16:28:36.787480 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:36.941843 kubelet[2425]: E1103 16:28:36.787753 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:36.941843 kubelet[2425]: E1103 16:28:36.788868 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 3 16:28:36.941843 kubelet[2425]: E1103 16:28:36.789169 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:37.794588 kubelet[2425]: E1103 16:28:37.794543 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 3 16:28:37.794795 kubelet[2425]: E1103 16:28:37.794675 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:37.795182 kubelet[2425]: E1103 16:28:37.795161 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 3 16:28:37.795290 kubelet[2425]: E1103 16:28:37.795260 2425 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:38.218279 kubelet[2425]: E1103 16:28:38.218223 2425 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 3 16:28:38.258591 kubelet[2425]: E1103 16:28:38.258409 2425 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18748c06ba15d86d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-03 16:28:34.728392813 +0000 UTC m=+1.515211851,LastTimestamp:2025-11-03 16:28:34.728392813 +0000 UTC m=+1.515211851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 3 16:28:38.308699 kubelet[2425]: I1103 16:28:38.308609 2425 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 3 16:28:38.335413 kubelet[2425]: I1103 16:28:38.335367 2425 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:38.341433 kubelet[2425]: E1103 16:28:38.341393 2425 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:38.341433 kubelet[2425]: I1103 16:28:38.341429 2425 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:38.343022 kubelet[2425]: E1103 16:28:38.342981 2425 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:38.343247 kubelet[2425]: I1103 16:28:38.343159 2425 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 3 16:28:38.345827 kubelet[2425]: E1103 16:28:38.345804 2425 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 3 16:28:38.721964 kubelet[2425]: I1103 16:28:38.721919 2425 apiserver.go:52] "Watching apiserver" Nov 3 16:28:38.736195 kubelet[2425]: I1103 16:28:38.736143 2425 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 3 16:28:38.795516 kubelet[2425]: I1103 16:28:38.795470 2425 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 3 16:28:38.797649 kubelet[2425]: E1103 16:28:38.797617 2425 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 3 16:28:38.797847 kubelet[2425]: E1103 16:28:38.797794 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 
16:28:40.645848 systemd[1]: Reload requested from client PID 2713 ('systemctl') (unit session-9.scope)... Nov 3 16:28:40.645870 systemd[1]: Reloading... Nov 3 16:28:40.722053 zram_generator::config[2757]: No configuration found. Nov 3 16:28:40.957906 systemd[1]: Reloading finished in 311 ms. Nov 3 16:28:40.994062 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 3 16:28:41.014522 systemd[1]: kubelet.service: Deactivated successfully. Nov 3 16:28:41.014872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 3 16:28:41.014931 systemd[1]: kubelet.service: Consumed 1.066s CPU time, 125.2M memory peak. Nov 3 16:28:41.017160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 3 16:28:41.240425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 3 16:28:41.252322 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 3 16:28:41.309482 kubelet[2802]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 3 16:28:41.309482 kubelet[2802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 3 16:28:41.309948 kubelet[2802]: I1103 16:28:41.309530 2802 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 3 16:28:41.316535 kubelet[2802]: I1103 16:28:41.316480 2802 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 3 16:28:41.316535 kubelet[2802]: I1103 16:28:41.316511 2802 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 3 16:28:41.316684 kubelet[2802]: I1103 16:28:41.316546 2802 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 3 16:28:41.316684 kubelet[2802]: I1103 16:28:41.316554 2802 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 3 16:28:41.316807 kubelet[2802]: I1103 16:28:41.316785 2802 server.go:956] "Client rotation is on, will bootstrap in background" Nov 3 16:28:41.317903 kubelet[2802]: I1103 16:28:41.317870 2802 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 3 16:28:41.319739 kubelet[2802]: I1103 16:28:41.319678 2802 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 3 16:28:41.370692 kubelet[2802]: I1103 16:28:41.370656 2802 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 3 16:28:41.375839 kubelet[2802]: I1103 16:28:41.375791 2802 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 3 16:28:41.376091 kubelet[2802]: I1103 16:28:41.376049 2802 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 3 16:28:41.376252 kubelet[2802]: I1103 16:28:41.376083 2802 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 3 16:28:41.376339 kubelet[2802]: I1103 16:28:41.376254 2802 topology_manager.go:138] "Creating topology manager with none policy" Nov 3 16:28:41.376339 kubelet[2802]: I1103 16:28:41.376263 2802 container_manager_linux.go:306] "Creating device plugin manager" Nov 3 16:28:41.376339 kubelet[2802]: I1103 16:28:41.376288 2802 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 3 16:28:41.377130 kubelet[2802]: I1103 16:28:41.377096 2802 state_mem.go:36] "Initialized new in-memory state store" Nov 3 16:28:41.377290 kubelet[2802]: I1103 16:28:41.377272 2802 kubelet.go:475] "Attempting to sync node with API server" Nov 3 16:28:41.377319 kubelet[2802]: I1103 16:28:41.377292 2802 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 3 16:28:41.377319 kubelet[2802]: I1103 16:28:41.377315 2802 kubelet.go:387] "Adding apiserver pod source" Nov 3 16:28:41.377372 kubelet[2802]: I1103 16:28:41.377341 2802 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 3 16:28:41.378445 kubelet[2802]: I1103 16:28:41.378425 2802 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 3 16:28:41.378927 kubelet[2802]: I1103 16:28:41.378910 2802 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 3 16:28:41.378979 kubelet[2802]: I1103 16:28:41.378938 2802 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 3 16:28:41.381979 kubelet[2802]: I1103 
16:28:41.381960 2802 server.go:1262] "Started kubelet" Nov 3 16:28:41.382185 kubelet[2802]: I1103 16:28:41.382143 2802 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 3 16:28:41.382330 kubelet[2802]: I1103 16:28:41.382265 2802 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 3 16:28:41.382376 kubelet[2802]: I1103 16:28:41.382334 2802 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 3 16:28:41.382817 kubelet[2802]: I1103 16:28:41.382790 2802 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 3 16:28:41.383771 kubelet[2802]: I1103 16:28:41.383702 2802 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 3 16:28:41.385826 kubelet[2802]: I1103 16:28:41.384700 2802 server.go:310] "Adding debug handlers to kubelet server" Nov 3 16:28:41.390446 kubelet[2802]: I1103 16:28:41.390401 2802 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 3 16:28:41.390575 kubelet[2802]: I1103 16:28:41.390512 2802 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 3 16:28:41.391331 kubelet[2802]: I1103 16:28:41.391313 2802 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 3 16:28:41.391560 kubelet[2802]: I1103 16:28:41.391517 2802 reconciler.go:29] "Reconciler: start to sync state" Nov 3 16:28:41.393162 kubelet[2802]: E1103 16:28:41.393133 2802 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 3 16:28:41.393246 kubelet[2802]: I1103 16:28:41.393217 2802 factory.go:223] Registration of the systemd container factory successfully Nov 3 16:28:41.393372 kubelet[2802]: I1103 16:28:41.393343 2802 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 3 16:28:41.396275 kubelet[2802]: I1103 16:28:41.396246 2802 factory.go:223] Registration of the containerd container factory successfully Nov 3 16:28:41.405943 kubelet[2802]: I1103 16:28:41.405799 2802 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 3 16:28:41.407161 kubelet[2802]: I1103 16:28:41.407120 2802 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 3 16:28:41.407161 kubelet[2802]: I1103 16:28:41.407158 2802 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 3 16:28:41.407236 kubelet[2802]: I1103 16:28:41.407194 2802 kubelet.go:2427] "Starting kubelet main sync loop" Nov 3 16:28:41.407276 kubelet[2802]: E1103 16:28:41.407249 2802 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 3 16:28:41.446291 kubelet[2802]: I1103 16:28:41.446249 2802 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 3 16:28:41.446291 kubelet[2802]: I1103 16:28:41.446271 2802 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 3 16:28:41.446291 kubelet[2802]: I1103 16:28:41.446298 2802 state_mem.go:36] "Initialized new in-memory state store" Nov 3 16:28:41.446488 kubelet[2802]: I1103 16:28:41.446446 2802 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 3 16:28:41.446488 kubelet[2802]: I1103 16:28:41.446458 2802 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 3 16:28:41.446528 kubelet[2802]: I1103 16:28:41.446494 2802 policy_none.go:49] "None policy: Start" Nov 3 16:28:41.446528 kubelet[2802]: I1103 16:28:41.446508 2802 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 3 16:28:41.446528 kubelet[2802]: I1103 16:28:41.446525 2802 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 3 16:28:41.446707 kubelet[2802]: I1103 16:28:41.446689 2802 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 3 16:28:41.446731 kubelet[2802]: I1103 16:28:41.446711 2802 policy_none.go:47] "Start" Nov 3 16:28:41.451555 kubelet[2802]: E1103 16:28:41.451530 2802 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 3 16:28:41.451792 kubelet[2802]: I1103 16:28:41.451767 2802 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 3 16:28:41.451890 kubelet[2802]: I1103 16:28:41.451786 2802 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 3 16:28:41.452128 kubelet[2802]: I1103 16:28:41.452113 2802 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 3 16:28:41.453626 kubelet[2802]: E1103 16:28:41.453593 2802 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 3 16:28:41.508470 kubelet[2802]: I1103 16:28:41.508338 2802 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:41.508598 kubelet[2802]: I1103 16:28:41.508556 2802 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:41.508878 kubelet[2802]: I1103 16:28:41.508717 2802 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 3 16:28:41.561620 kubelet[2802]: I1103 16:28:41.561574 2802 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 3 16:28:41.567480 kubelet[2802]: I1103 16:28:41.567416 2802 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 3 16:28:41.567605 kubelet[2802]: I1103 16:28:41.567528 2802 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 3 16:28:41.693376 kubelet[2802]: I1103 16:28:41.693311 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:41.693376 kubelet[2802]: I1103 16:28:41.693365 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:41.693376 kubelet[2802]: I1103 16:28:41.693389 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:41.693610 kubelet[2802]: I1103 16:28:41.693418 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 3 16:28:41.693610 kubelet[2802]: I1103 16:28:41.693434 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/005fb0e00d8bede726b92542eba27ebd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"005fb0e00d8bede726b92542eba27ebd\") " pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:41.693610 kubelet[2802]: I1103 16:28:41.693447 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/005fb0e00d8bede726b92542eba27ebd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"005fb0e00d8bede726b92542eba27ebd\") " pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:41.693610 kubelet[2802]: I1103 16:28:41.693464 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/005fb0e00d8bede726b92542eba27ebd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"005fb0e00d8bede726b92542eba27ebd\") " pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:41.693610 kubelet[2802]: I1103 16:28:41.693480 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:41.693781 kubelet[2802]: I1103 16:28:41.693495 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:41.815778 kubelet[2802]: E1103 16:28:41.815632 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:41.817369 kubelet[2802]: E1103 16:28:41.816596 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:41.817369 kubelet[2802]: E1103 16:28:41.816655 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:42.378543 kubelet[2802]: I1103 16:28:42.378493 2802 apiserver.go:52] "Watching apiserver" Nov 3 16:28:42.392949 kubelet[2802]: I1103 16:28:42.391815 2802 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 3 16:28:42.427870 kubelet[2802]: I1103 16:28:42.427834 2802 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:42.428059 kubelet[2802]: I1103 16:28:42.427961 2802 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:42.429193 kubelet[2802]: I1103 16:28:42.429174 2802 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 3 16:28:42.435413 kubelet[2802]: E1103 16:28:42.435264 2802 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 3 16:28:42.435996 kubelet[2802]: E1103 16:28:42.435941 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:42.436205 kubelet[2802]: E1103 16:28:42.436020 2802 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 3 16:28:42.436205 kubelet[2802]: E1103 16:28:42.436125 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:42.436205 kubelet[2802]: E1103 16:28:42.436200 2802 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already 
exists" pod="kube-system/kube-apiserver-localhost" Nov 3 16:28:42.436756 kubelet[2802]: E1103 16:28:42.436292 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:42.456074 kubelet[2802]: I1103 16:28:42.455964 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.455937789 podStartE2EDuration="1.455937789s" podCreationTimestamp="2025-11-03 16:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-03 16:28:42.449094625 +0000 UTC m=+1.192915601" watchObservedRunningTime="2025-11-03 16:28:42.455937789 +0000 UTC m=+1.199758765" Nov 3 16:28:42.456286 kubelet[2802]: I1103 16:28:42.456136 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4561309869999999 podStartE2EDuration="1.456130987s" podCreationTimestamp="2025-11-03 16:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-03 16:28:42.455585659 +0000 UTC m=+1.199406635" watchObservedRunningTime="2025-11-03 16:28:42.456130987 +0000 UTC m=+1.199951963" Nov 3 16:28:42.464205 kubelet[2802]: I1103 16:28:42.464121 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4641095179999999 podStartE2EDuration="1.464109518s" podCreationTimestamp="2025-11-03 16:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-03 16:28:42.464018884 +0000 UTC m=+1.207839860" watchObservedRunningTime="2025-11-03 16:28:42.464109518 +0000 UTC m=+1.207930494" Nov 3 16:28:43.428826 kubelet[2802]: E1103 16:28:43.428781 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:43.428826 kubelet[2802]: E1103 16:28:43.428830 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:43.429489 kubelet[2802]: E1103 16:28:43.429188 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:43.588350 update_engine[1585]: I20251103 16:28:43.588220 1585 update_attempter.cc:509] Updating boot flags... 
Nov 3 16:28:44.430623 kubelet[2802]: E1103 16:28:44.430583 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:44.525339 kubelet[2802]: E1103 16:28:44.525275 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:45.432412 kubelet[2802]: E1103 16:28:45.432363 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:46.802389 kubelet[2802]: I1103 16:28:46.802338 2802 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 3 16:28:46.802977 kubelet[2802]: I1103 16:28:46.802882 2802 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 3 16:28:46.803046 containerd[1613]: time="2025-11-03T16:28:46.802688865Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 3 16:28:47.829530 systemd[1]: Created slice kubepods-besteffort-podc5cfc773_15c7_47e7_9382_7ed2f31ae7a1.slice - libcontainer container kubepods-besteffort-podc5cfc773_15c7_47e7_9382_7ed2f31ae7a1.slice. Nov 3 16:28:47.923556 kubelet[2802]: I1103 16:28:47.923483 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5cfc773-15c7-47e7-9382-7ed2f31ae7a1-kube-proxy\") pod \"kube-proxy-9lcsg\" (UID: \"c5cfc773-15c7-47e7-9382-7ed2f31ae7a1\") " pod="kube-system/kube-proxy-9lcsg" Nov 3 16:28:47.923556 kubelet[2802]: I1103 16:28:47.923534 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5cfc773-15c7-47e7-9382-7ed2f31ae7a1-xtables-lock\") pod \"kube-proxy-9lcsg\" (UID: \"c5cfc773-15c7-47e7-9382-7ed2f31ae7a1\") " pod="kube-system/kube-proxy-9lcsg" Nov 3 16:28:47.923556 kubelet[2802]: I1103 16:28:47.923554 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5cfc773-15c7-47e7-9382-7ed2f31ae7a1-lib-modules\") pod \"kube-proxy-9lcsg\" (UID: \"c5cfc773-15c7-47e7-9382-7ed2f31ae7a1\") " pod="kube-system/kube-proxy-9lcsg" Nov 3 16:28:47.923556 kubelet[2802]: I1103 16:28:47.923571 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n44jh\" (UniqueName: \"kubernetes.io/projected/c5cfc773-15c7-47e7-9382-7ed2f31ae7a1-kube-api-access-n44jh\") pod \"kube-proxy-9lcsg\" (UID: \"c5cfc773-15c7-47e7-9382-7ed2f31ae7a1\") " pod="kube-system/kube-proxy-9lcsg" Nov 3 16:28:47.932765 systemd[1]: Created slice kubepods-besteffort-pod217bb8e8_7d0f_4d1a_a60d_5feaaf28977f.slice - libcontainer container kubepods-besteffort-pod217bb8e8_7d0f_4d1a_a60d_5feaaf28977f.slice. 
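The runtime-config update above hands PodCIDR 192.168.0.0/24 down to the CRI. A quick sketch of validating such a CIDR and computing its address capacity before applying it:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The CIDR the kubelet pushes to containerd in the log above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 of an IPv4 range leaves 8 host bits: 256 addresses.
	fmt.Printf("network=%s addresses=%d\n", ipnet, 1<<(bits-ones))
}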
Nov 3 16:28:48.024444 kubelet[2802]: I1103 16:28:48.024381 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/217bb8e8-7d0f-4d1a-a60d-5feaaf28977f-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-jt957\" (UID: \"217bb8e8-7d0f-4d1a-a60d-5feaaf28977f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jt957" Nov 3 16:28:48.024444 kubelet[2802]: I1103 16:28:48.024454 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2pz\" (UniqueName: \"kubernetes.io/projected/217bb8e8-7d0f-4d1a-a60d-5feaaf28977f-kube-api-access-lf2pz\") pod \"tigera-operator-65cdcdfd6d-jt957\" (UID: \"217bb8e8-7d0f-4d1a-a60d-5feaaf28977f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jt957" Nov 3 16:28:48.149281 kubelet[2802]: E1103 16:28:48.149227 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:48.150067 containerd[1613]: time="2025-11-03T16:28:48.149995307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lcsg,Uid:c5cfc773-15c7-47e7-9382-7ed2f31ae7a1,Namespace:kube-system,Attempt:0,}" Nov 3 16:28:48.172816 containerd[1613]: time="2025-11-03T16:28:48.172751962Z" level=info msg="connecting to shim 1b2b2c0de7121149b785f57b1c805034c6adf397fb215d6d63a8382071f778bd" address="unix:///run/containerd/s/36ac879a4f71cbeb6fbb3b3863314c3228332fb97801974ccc83070704b91908" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:28:48.210151 systemd[1]: Started cri-containerd-1b2b2c0de7121149b785f57b1c805034c6adf397fb215d6d63a8382071f778bd.scope - libcontainer container 1b2b2c0de7121149b785f57b1c805034c6adf397fb215d6d63a8382071f778bd. 
Nov 3 16:28:48.239637 containerd[1613]: time="2025-11-03T16:28:48.239596190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jt957,Uid:217bb8e8-7d0f-4d1a-a60d-5feaaf28977f,Namespace:tigera-operator,Attempt:0,}" Nov 3 16:28:48.255297 containerd[1613]: time="2025-11-03T16:28:48.255262971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lcsg,Uid:c5cfc773-15c7-47e7-9382-7ed2f31ae7a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b2b2c0de7121149b785f57b1c805034c6adf397fb215d6d63a8382071f778bd\"" Nov 3 16:28:48.256223 kubelet[2802]: E1103 16:28:48.256197 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:48.262048 containerd[1613]: time="2025-11-03T16:28:48.261530807Z" level=info msg="CreateContainer within sandbox \"1b2b2c0de7121149b785f57b1c805034c6adf397fb215d6d63a8382071f778bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 3 16:28:48.274450 containerd[1613]: time="2025-11-03T16:28:48.274397202Z" level=info msg="Container d31f3d7f624613c28ec47da22d03539a354ec527173cc0b7022c0478bd5dbd53: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:28:48.282677 containerd[1613]: time="2025-11-03T16:28:48.282640720Z" level=info msg="CreateContainer within sandbox \"1b2b2c0de7121149b785f57b1c805034c6adf397fb215d6d63a8382071f778bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d31f3d7f624613c28ec47da22d03539a354ec527173cc0b7022c0478bd5dbd53\"" Nov 3 16:28:48.283450 containerd[1613]: time="2025-11-03T16:28:48.283378511Z" level=info msg="StartContainer for \"d31f3d7f624613c28ec47da22d03539a354ec527173cc0b7022c0478bd5dbd53\"" Nov 3 16:28:48.284995 containerd[1613]: time="2025-11-03T16:28:48.284952744Z" level=info msg="connecting to shim d31f3d7f624613c28ec47da22d03539a354ec527173cc0b7022c0478bd5dbd53" address="unix:///run/containerd/s/36ac879a4f71cbeb6fbb3b3863314c3228332fb97801974ccc83070704b91908" protocol=ttrpc version=3 Nov 3 16:28:48.287606 containerd[1613]: time="2025-11-03T16:28:48.287542083Z" level=info msg="connecting to shim d226cc4f28259680d26c32936d1575a36f7ceee5c35c7193769c14fe4f2cf049" address="unix:///run/containerd/s/15338201300a2ca57244e3b371f30dc3cba6c498ea8ed3daf626534cbb61beca" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:28:48.310246 systemd[1]: Started cri-containerd-d31f3d7f624613c28ec47da22d03539a354ec527173cc0b7022c0478bd5dbd53.scope - libcontainer container d31f3d7f624613c28ec47da22d03539a354ec527173cc0b7022c0478bd5dbd53. Nov 3 16:28:48.337160 systemd[1]: Started cri-containerd-d226cc4f28259680d26c32936d1575a36f7ceee5c35c7193769c14fe4f2cf049.scope - libcontainer container d226cc4f28259680d26c32936d1575a36f7ceee5c35c7193769c14fe4f2cf049. 
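The surrounding entries trace the standard CRI sequence for kube-proxy: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox, and StartContainer launches it. A minimal sketch of the same sequence via the cri-api client; connection setup is as in the sketch above, and the kube-proxy image reference is hypothetical, since the log names only the container:

```go
// Sketch of the sandbox -> container -> start sequence visible in the log.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata taken from the RunPodSandbox entry above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-9lcsg",
			Uid:       "c5cfc773-15c7-47e7-9382-7ed2f31ae7a1",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Hypothetical image reference; the log does not record the tag.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.34.1"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```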
Nov 3 16:28:48.382055 containerd[1613]: time="2025-11-03T16:28:48.381986996Z" level=info msg="StartContainer for \"d31f3d7f624613c28ec47da22d03539a354ec527173cc0b7022c0478bd5dbd53\" returns successfully" Nov 3 16:28:48.388427 containerd[1613]: time="2025-11-03T16:28:48.388364252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jt957,Uid:217bb8e8-7d0f-4d1a-a60d-5feaaf28977f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d226cc4f28259680d26c32936d1575a36f7ceee5c35c7193769c14fe4f2cf049\"" Nov 3 16:28:48.399176 containerd[1613]: time="2025-11-03T16:28:48.399132644Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 3 16:28:48.446647 kubelet[2802]: E1103 16:28:48.446491 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:48.457390 kubelet[2802]: I1103 16:28:48.457338 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9lcsg" podStartSLOduration=1.4563819150000001 podStartE2EDuration="1.456381915s" podCreationTimestamp="2025-11-03 16:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-03 16:28:48.455861231 +0000 UTC m=+7.199682207" watchObservedRunningTime="2025-11-03 16:28:48.456381915 +0000 UTC m=+7.200202891" Nov 3 16:28:48.703136 kubelet[2802]: E1103 16:28:48.702928 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:49.448165 kubelet[2802]: E1103 16:28:49.448117 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:49.824539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760172120.mount: Deactivated successfully. 
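The kube-proxy podStartSLOduration above works out directly from the logged timestamps: with no image pull (both pulling fields are the zero time), the SLO duration equals the watch-observed running time minus the pod creation timestamp. A quick check of the arithmetic:

```go
// Verifies podStartSLOduration=1.456381915s for kube-proxy-9lcsg from the
// timestamps recorded in the log entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2025-11-03T16:28:47Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-11-03T16:28:48.456381915Z")
	fmt.Println(running.Sub(created)) // 1.456381915s, matching the log
}
```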
Nov 3 16:28:50.449535 kubelet[2802]: E1103 16:28:50.449481 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:51.031146 containerd[1613]: time="2025-11-03T16:28:51.031086328Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:51.031960 containerd[1613]: time="2025-11-03T16:28:51.031914074Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Nov 3 16:28:51.033194 containerd[1613]: time="2025-11-03T16:28:51.033161678Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:51.035053 containerd[1613]: time="2025-11-03T16:28:51.035019626Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:28:51.036634 containerd[1613]: time="2025-11-03T16:28:51.036111531Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.636929035s" Nov 3 16:28:51.036634 containerd[1613]: time="2025-11-03T16:28:51.036182769Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 3 16:28:51.043779 containerd[1613]: time="2025-11-03T16:28:51.043719338Z" level=info msg="CreateContainer within sandbox \"d226cc4f28259680d26c32936d1575a36f7ceee5c35c7193769c14fe4f2cf049\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 3 16:28:51.052428 containerd[1613]: time="2025-11-03T16:28:51.052380626Z" level=info msg="Container 1406b51eb78ba9b446b03272156e6ed8c3059a3303297ec575b80172c5688744: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:28:51.055890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431133497.mount: Deactivated successfully. Nov 3 16:28:51.058513 containerd[1613]: time="2025-11-03T16:28:51.058469007Z" level=info msg="CreateContainer within sandbox \"d226cc4f28259680d26c32936d1575a36f7ceee5c35c7193769c14fe4f2cf049\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1406b51eb78ba9b446b03272156e6ed8c3059a3303297ec575b80172c5688744\"" Nov 3 16:28:51.059097 containerd[1613]: time="2025-11-03T16:28:51.059042250Z" level=info msg="StartContainer for \"1406b51eb78ba9b446b03272156e6ed8c3059a3303297ec575b80172c5688744\"" Nov 3 16:28:51.059818 containerd[1613]: time="2025-11-03T16:28:51.059780067Z" level=info msg="connecting to shim 1406b51eb78ba9b446b03272156e6ed8c3059a3303297ec575b80172c5688744" address="unix:///run/containerd/s/15338201300a2ca57244e3b371f30dc3cba6c498ea8ed3daf626534cbb61beca" protocol=ttrpc version=3 Nov 3 16:28:51.083144 systemd[1]: Started cri-containerd-1406b51eb78ba9b446b03272156e6ed8c3059a3303297ec575b80172c5688744.scope - libcontainer container 1406b51eb78ba9b446b03272156e6ed8c3059a3303297ec575b80172c5688744. 
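The pull above took roughly 2.64 s between the PullImage request and the Pulled event (containerd's own figure is 2.636929035s for a 23.5 MB compressed operator image). A hedged sketch of an equivalent pull through containerd's Go client, using the 1.x module path; the log's pull is actually driven through CRI, but the underlying fetch and the k8s.io namespace are the same:

```go
// Sketch of pulling the tigera operator image directly via containerd's client.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the k8s.io namespace, as the log's
	// "connecting to shim ... namespace=k8s.io" entries show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s", img.Name())
}
```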
Nov 3 16:28:51.116467 containerd[1613]: time="2025-11-03T16:28:51.116431771Z" level=info msg="StartContainer for \"1406b51eb78ba9b446b03272156e6ed8c3059a3303297ec575b80172c5688744\" returns successfully" Nov 3 16:28:51.462452 kubelet[2802]: I1103 16:28:51.462377 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-jt957" podStartSLOduration=1.819917126 podStartE2EDuration="4.462363079s" podCreationTimestamp="2025-11-03 16:28:47 +0000 UTC" firstStartedPulling="2025-11-03 16:28:48.39714974 +0000 UTC m=+7.140970716" lastFinishedPulling="2025-11-03 16:28:51.039595703 +0000 UTC m=+9.783416669" observedRunningTime="2025-11-03 16:28:51.462252194 +0000 UTC m=+10.206073170" watchObservedRunningTime="2025-11-03 16:28:51.462363079 +0000 UTC m=+10.206184055" Nov 3 16:28:54.257340 kubelet[2802]: E1103 16:28:54.257291 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:54.460251 kubelet[2802]: E1103 16:28:54.460211 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:54.529864 kubelet[2802]: E1103 16:28:54.529737 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:28:56.294583 sudo[1837]: pam_unix(sudo:session): session closed for user root Nov 3 16:28:56.509740 sshd[1836]: Connection closed by 10.0.0.1 port 51728 Nov 3 16:28:56.511623 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Nov 3 16:28:56.518404 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. Nov 3 16:28:56.520406 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:51728.service: Deactivated successfully. Nov 3 16:28:56.522845 systemd[1]: session-9.scope: Deactivated successfully. Nov 3 16:28:56.523103 systemd[1]: session-9.scope: Consumed 6.161s CPU time, 223.5M memory peak. Nov 3 16:28:56.525671 systemd-logind[1582]: Removed session 9. Nov 3 16:29:00.377531 systemd[1]: Created slice kubepods-besteffort-pod3ec3a934_8f52_40b9_abfb_cd8a85670387.slice - libcontainer container kubepods-besteffort-pod3ec3a934_8f52_40b9_abfb_cd8a85670387.slice. 
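The tigera-operator SLO entry above shows how image-pull time is excluded: podStartE2EDuration is watch-observed running time minus creation (4.462363079s), and podStartSLOduration subtracts the pull window (lastFinishedPulling minus firstStartedPulling). Recomputing from the logged stamps reproduces the value to within the ~10 ns rounding between the wall-clock and monotonic (m=+...) fields:

```go
// Rechecks the tigera-operator startup durations from the logged timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, _ := time.Parse(time.RFC3339Nano, s)
		return t
	}
	created := parse("2025-11-03T16:28:47Z")
	running := parse("2025-11-03T16:28:51.462363079Z")
	pullStart := parse("2025-11-03T16:28:48.39714974Z")
	pullEnd := parse("2025-11-03T16:28:51.039595703Z")

	e2e := running.Sub(created)         // 4.462363079s = podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // ~1.8199171s = podStartSLOduration
	fmt.Println(e2e, slo)
}
```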
Nov 3 16:29:00.399579 kubelet[2802]: I1103 16:29:00.399510 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ec3a934-8f52-40b9-abfb-cd8a85670387-typha-certs\") pod \"calico-typha-599b9d7bdd-rvj9p\" (UID: \"3ec3a934-8f52-40b9-abfb-cd8a85670387\") " pod="calico-system/calico-typha-599b9d7bdd-rvj9p" Nov 3 16:29:00.399579 kubelet[2802]: I1103 16:29:00.399576 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ec3a934-8f52-40b9-abfb-cd8a85670387-tigera-ca-bundle\") pod \"calico-typha-599b9d7bdd-rvj9p\" (UID: \"3ec3a934-8f52-40b9-abfb-cd8a85670387\") " pod="calico-system/calico-typha-599b9d7bdd-rvj9p" Nov 3 16:29:00.400195 kubelet[2802]: I1103 16:29:00.399606 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk49l\" (UniqueName: \"kubernetes.io/projected/3ec3a934-8f52-40b9-abfb-cd8a85670387-kube-api-access-zk49l\") pod \"calico-typha-599b9d7bdd-rvj9p\" (UID: \"3ec3a934-8f52-40b9-abfb-cd8a85670387\") " pod="calico-system/calico-typha-599b9d7bdd-rvj9p" Nov 3 16:29:00.480228 systemd[1]: Created slice kubepods-besteffort-pod2f87809a_6e34_458a_8542_8227e25a014b.slice - libcontainer container kubepods-besteffort-pod2f87809a_6e34_458a_8542_8227e25a014b.slice. Nov 3 16:29:00.499780 kubelet[2802]: I1103 16:29:00.499733 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2f87809a-6e34-458a-8542-8227e25a014b-var-run-calico\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.499979 kubelet[2802]: I1103 16:29:00.499798 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f87809a-6e34-458a-8542-8227e25a014b-cni-log-dir\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.499979 kubelet[2802]: I1103 16:29:00.499815 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f87809a-6e34-458a-8542-8227e25a014b-cni-net-dir\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.499979 kubelet[2802]: I1103 16:29:00.499827 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f87809a-6e34-458a-8542-8227e25a014b-flexvol-driver-host\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.499979 kubelet[2802]: I1103 16:29:00.499842 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f87809a-6e34-458a-8542-8227e25a014b-lib-modules\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.499979 kubelet[2802]: I1103 16:29:00.499862 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/2f87809a-6e34-458a-8542-8227e25a014b-policysync\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.500144 kubelet[2802]: I1103 16:29:00.499875 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f87809a-6e34-458a-8542-8227e25a014b-var-lib-calico\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.500144 kubelet[2802]: I1103 16:29:00.499890 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkjz6\" (UniqueName: \"kubernetes.io/projected/2f87809a-6e34-458a-8542-8227e25a014b-kube-api-access-kkjz6\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.500144 kubelet[2802]: I1103 16:29:00.499903 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f87809a-6e34-458a-8542-8227e25a014b-xtables-lock\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.500144 kubelet[2802]: I1103 16:29:00.499922 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f87809a-6e34-458a-8542-8227e25a014b-node-certs\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.500144 kubelet[2802]: I1103 16:29:00.499935 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f87809a-6e34-458a-8542-8227e25a014b-cni-bin-dir\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.500333 kubelet[2802]: I1103 16:29:00.499950 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f87809a-6e34-458a-8542-8227e25a014b-tigera-ca-bundle\") pod \"calico-node-fxkrp\" (UID: \"2f87809a-6e34-458a-8542-8227e25a014b\") " pod="calico-system/calico-node-fxkrp" Nov 3 16:29:00.772945 kubelet[2802]: E1103 16:29:00.772838 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:00.772945 kubelet[2802]: W1103 16:29:00.772866 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:00.772945 kubelet[2802]: E1103 16:29:00.772896 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:00.822473 kubelet[2802]: E1103 16:29:00.822434 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:00.870688 kubelet[2802]: E1103 16:29:00.870225 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:00.870860 containerd[1613]: time="2025-11-03T16:29:00.870834390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599b9d7bdd-rvj9p,Uid:3ec3a934-8f52-40b9-abfb-cd8a85670387,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:00.875108 containerd[1613]: time="2025-11-03T16:29:00.874899853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fxkrp,Uid:2f87809a-6e34-458a-8542-8227e25a014b,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:00.903613 kubelet[2802]: E1103 16:29:00.903370 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:00.927462 containerd[1613]: time="2025-11-03T16:29:00.927192045Z" level=info msg="connecting to shim 6e89352513c4e580897195307d2c9a7b99dccb49e4e8997cf8436dd82a7c883e" address="unix:///run/containerd/s/502c8c21104d86539cb75ceb1312ae6a6025c3aa60606e96577de90f2ee28912" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:00.936077 containerd[1613]: time="2025-11-03T16:29:00.935991317Z" level=info msg="connecting to shim 1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076" address="unix:///run/containerd/s/7bf1c782a680571f86455fb7dcebcb35bceda97cb5081cc86b9bbbf910d75721" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:00.969228 systemd[1]: Started cri-containerd-6e89352513c4e580897195307d2c9a7b99dccb49e4e8997cf8436dd82a7c883e.scope - libcontainer container 6e89352513c4e580897195307d2c9a7b99dccb49e4e8997cf8436dd82a7c883e. Nov 3 16:29:00.973634 systemd[1]: Started cri-containerd-1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076.scope - libcontainer container 1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076. Nov 3 16:29:01.002765 kubelet[2802]: E1103 16:29:01.002711 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.003104 kubelet[2802]: W1103 16:29:01.002735 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.003104 kubelet[2802]: E1103 16:29:01.002993 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.003418 kubelet[2802]: E1103 16:29:01.003405 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.003545 kubelet[2802]: W1103 16:29:01.003483 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.003545 kubelet[2802]: E1103 16:29:01.003497 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.003902 kubelet[2802]: E1103 16:29:01.003864 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.004034 kubelet[2802]: W1103 16:29:01.003986 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.004106 kubelet[2802]: E1103 16:29:01.004094 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.004441 kubelet[2802]: E1103 16:29:01.004429 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.004546 kubelet[2802]: W1103 16:29:01.004504 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.004546 kubelet[2802]: E1103 16:29:01.004516 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.005075 kubelet[2802]: E1103 16:29:01.005052 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.005215 kubelet[2802]: W1103 16:29:01.005145 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.005215 kubelet[2802]: E1103 16:29:01.005160 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.005215 kubelet[2802]: I1103 16:29:01.005183 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6648243c-1869-4d41-a84f-1ec8db284c55-kubelet-dir\") pod \"csi-node-driver-lnj89\" (UID: \"6648243c-1869-4d41-a84f-1ec8db284c55\") " pod="calico-system/csi-node-driver-lnj89" Nov 3 16:29:01.005611 kubelet[2802]: E1103 16:29:01.005566 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.005611 kubelet[2802]: W1103 16:29:01.005578 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.005611 kubelet[2802]: E1103 16:29:01.005588 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.006035 kubelet[2802]: E1103 16:29:01.005967 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.006035 kubelet[2802]: W1103 16:29:01.005981 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.006035 kubelet[2802]: E1103 16:29:01.005992 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.006466 kubelet[2802]: E1103 16:29:01.006452 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.006556 kubelet[2802]: W1103 16:29:01.006529 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.006635 kubelet[2802]: E1103 16:29:01.006608 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.006976 kubelet[2802]: E1103 16:29:01.006964 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.007082 kubelet[2802]: W1103 16:29:01.007071 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.007161 kubelet[2802]: E1103 16:29:01.007149 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.007533 kubelet[2802]: E1103 16:29:01.007497 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.007533 kubelet[2802]: W1103 16:29:01.007509 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.007533 kubelet[2802]: E1103 16:29:01.007521 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.007991 kubelet[2802]: E1103 16:29:01.007957 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.007991 kubelet[2802]: W1103 16:29:01.007969 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.007991 kubelet[2802]: E1103 16:29:01.007979 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.008370 kubelet[2802]: E1103 16:29:01.008357 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.008462 kubelet[2802]: W1103 16:29:01.008426 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.008462 kubelet[2802]: E1103 16:29:01.008439 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.008753 kubelet[2802]: E1103 16:29:01.008740 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.008862 kubelet[2802]: W1103 16:29:01.008814 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.008862 kubelet[2802]: E1103 16:29:01.008827 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.009170 kubelet[2802]: E1103 16:29:01.009137 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.009293 kubelet[2802]: W1103 16:29:01.009240 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.009293 kubelet[2802]: E1103 16:29:01.009251 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.009995 kubelet[2802]: E1103 16:29:01.009982 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.010204 kubelet[2802]: W1103 16:29:01.010091 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.010204 kubelet[2802]: E1103 16:29:01.010105 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.010334 kubelet[2802]: E1103 16:29:01.010321 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.010389 kubelet[2802]: W1103 16:29:01.010377 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.010513 kubelet[2802]: E1103 16:29:01.010432 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.010634 kubelet[2802]: E1103 16:29:01.010621 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.010777 kubelet[2802]: W1103 16:29:01.010675 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.010777 kubelet[2802]: E1103 16:29:01.010688 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.010885 kubelet[2802]: E1103 16:29:01.010872 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.010933 kubelet[2802]: W1103 16:29:01.010922 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.010980 kubelet[2802]: E1103 16:29:01.010970 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.011234 kubelet[2802]: E1103 16:29:01.011222 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.011406 kubelet[2802]: W1103 16:29:01.011297 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.011406 kubelet[2802]: E1103 16:29:01.011313 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.011530 kubelet[2802]: E1103 16:29:01.011517 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.011612 kubelet[2802]: W1103 16:29:01.011600 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.011699 kubelet[2802]: E1103 16:29:01.011681 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.011989 kubelet[2802]: E1103 16:29:01.011976 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.012215 kubelet[2802]: W1103 16:29:01.012091 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.012215 kubelet[2802]: E1103 16:29:01.012117 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.012379 kubelet[2802]: E1103 16:29:01.012349 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.012444 kubelet[2802]: W1103 16:29:01.012432 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.012509 kubelet[2802]: E1103 16:29:01.012496 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.012807 kubelet[2802]: E1103 16:29:01.012731 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.012807 kubelet[2802]: W1103 16:29:01.012742 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.012807 kubelet[2802]: E1103 16:29:01.012751 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.064601 containerd[1613]: time="2025-11-03T16:29:01.064472126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fxkrp,Uid:2f87809a-6e34-458a-8542-8227e25a014b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076\"" Nov 3 16:29:01.065705 containerd[1613]: time="2025-11-03T16:29:01.065652767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599b9d7bdd-rvj9p,Uid:3ec3a934-8f52-40b9-abfb-cd8a85670387,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e89352513c4e580897195307d2c9a7b99dccb49e4e8997cf8436dd82a7c883e\"" Nov 3 16:29:01.066721 kubelet[2802]: E1103 16:29:01.066568 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:01.067097 kubelet[2802]: E1103 16:29:01.067057 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:01.068055 containerd[1613]: time="2025-11-03T16:29:01.067996097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 3 16:29:01.106560 kubelet[2802]: E1103 16:29:01.106529 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.106560 kubelet[2802]: W1103 16:29:01.106548 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.106716 kubelet[2802]: E1103 16:29:01.106571 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.106919 kubelet[2802]: E1103 16:29:01.106903 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.106919 kubelet[2802]: W1103 16:29:01.106913 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.106977 kubelet[2802]: E1103 16:29:01.106922 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.106977 kubelet[2802]: I1103 16:29:01.106944 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6648243c-1869-4d41-a84f-1ec8db284c55-socket-dir\") pod \"csi-node-driver-lnj89\" (UID: \"6648243c-1869-4d41-a84f-1ec8db284c55\") " pod="calico-system/csi-node-driver-lnj89" Nov 3 16:29:01.107238 kubelet[2802]: E1103 16:29:01.107213 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.107238 kubelet[2802]: W1103 16:29:01.107234 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.107304 kubelet[2802]: E1103 16:29:01.107253 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.107503 kubelet[2802]: E1103 16:29:01.107483 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.107503 kubelet[2802]: W1103 16:29:01.107498 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.107594 kubelet[2802]: E1103 16:29:01.107510 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.107754 kubelet[2802]: E1103 16:29:01.107736 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.107754 kubelet[2802]: W1103 16:29:01.107750 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.107808 kubelet[2802]: E1103 16:29:01.107765 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.108107 kubelet[2802]: E1103 16:29:01.108073 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.108107 kubelet[2802]: W1103 16:29:01.108098 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.108171 kubelet[2802]: E1103 16:29:01.108122 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.108171 kubelet[2802]: I1103 16:29:01.108158 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6648243c-1869-4d41-a84f-1ec8db284c55-registration-dir\") pod \"csi-node-driver-lnj89\" (UID: \"6648243c-1869-4d41-a84f-1ec8db284c55\") " pod="calico-system/csi-node-driver-lnj89" Nov 3 16:29:01.108385 kubelet[2802]: E1103 16:29:01.108367 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.108385 kubelet[2802]: W1103 16:29:01.108380 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.108449 kubelet[2802]: E1103 16:29:01.108390 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.108596 kubelet[2802]: E1103 16:29:01.108580 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.108596 kubelet[2802]: W1103 16:29:01.108591 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.108644 kubelet[2802]: E1103 16:29:01.108600 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.108876 kubelet[2802]: E1103 16:29:01.108853 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.108876 kubelet[2802]: W1103 16:29:01.108865 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.108876 kubelet[2802]: E1103 16:29:01.108873 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.109089 kubelet[2802]: E1103 16:29:01.109070 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.109089 kubelet[2802]: W1103 16:29:01.109083 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.109156 kubelet[2802]: E1103 16:29:01.109092 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.109156 kubelet[2802]: I1103 16:29:01.109116 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6648243c-1869-4d41-a84f-1ec8db284c55-varrun\") pod \"csi-node-driver-lnj89\" (UID: \"6648243c-1869-4d41-a84f-1ec8db284c55\") " pod="calico-system/csi-node-driver-lnj89" Nov 3 16:29:01.109326 kubelet[2802]: E1103 16:29:01.109309 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.109326 kubelet[2802]: W1103 16:29:01.109323 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.109367 kubelet[2802]: E1103 16:29:01.109333 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.109506 kubelet[2802]: E1103 16:29:01.109494 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.109534 kubelet[2802]: W1103 16:29:01.109506 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.109534 kubelet[2802]: E1103 16:29:01.109515 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.109716 kubelet[2802]: E1103 16:29:01.109703 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.109716 kubelet[2802]: W1103 16:29:01.109713 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.109760 kubelet[2802]: E1103 16:29:01.109721 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.109760 kubelet[2802]: I1103 16:29:01.109741 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh5j2\" (UniqueName: \"kubernetes.io/projected/6648243c-1869-4d41-a84f-1ec8db284c55-kube-api-access-zh5j2\") pod \"csi-node-driver-lnj89\" (UID: \"6648243c-1869-4d41-a84f-1ec8db284c55\") " pod="calico-system/csi-node-driver-lnj89" Nov 3 16:29:01.109967 kubelet[2802]: E1103 16:29:01.109945 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.109967 kubelet[2802]: W1103 16:29:01.109963 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.110032 kubelet[2802]: E1103 16:29:01.109974 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.110207 kubelet[2802]: E1103 16:29:01.110189 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.110207 kubelet[2802]: W1103 16:29:01.110202 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.110247 kubelet[2802]: E1103 16:29:01.110211 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.110432 kubelet[2802]: E1103 16:29:01.110415 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.110432 kubelet[2802]: W1103 16:29:01.110429 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.110478 kubelet[2802]: E1103 16:29:01.110440 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.110653 kubelet[2802]: E1103 16:29:01.110636 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.110653 kubelet[2802]: W1103 16:29:01.110649 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.110710 kubelet[2802]: E1103 16:29:01.110661 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.212592 kubelet[2802]: E1103 16:29:01.212544 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.212592 kubelet[2802]: W1103 16:29:01.212575 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.212763 kubelet[2802]: E1103 16:29:01.212614 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.212879 kubelet[2802]: E1103 16:29:01.212860 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.212879 kubelet[2802]: W1103 16:29:01.212874 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.212879 kubelet[2802]: E1103 16:29:01.212883 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:01.213102 kubelet[2802]: E1103 16:29:01.213084 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.213102 kubelet[2802]: W1103 16:29:01.213097 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.213161 kubelet[2802]: E1103 16:29:01.213106 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:01.236263 kubelet[2802]: E1103 16:29:01.236223 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:01.236263 kubelet[2802]: W1103 16:29:01.236251 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:01.236263 kubelet[2802]: E1103 16:29:01.236272 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:02.407967 kubelet[2802]: E1103 16:29:02.407897 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:02.593245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526915572.mount: Deactivated successfully.
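The repeated driver-call.go failures above come from kubelet's FlexVolume dynamic probe: it executes each binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/ with the argument init and unmarshals stdout as JSON. Here the nodeagent~uds binary does not exist, stdout is empty, and unmarshalling an empty string yields exactly "unexpected end of JSON input". As a minimal sketch (not kubelet's code), this is roughly the init response a conforming driver would print; the struct fields follow the published FlexVolume status convention:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// A "Success" status with capabilities is what kubelet expects from init.
		// Printing nothing at all is what produces the
		// "unexpected end of JSON input" errors seen in the log above.
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Unimplemented calls conventionally report "Not supported".
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}
```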
Nov 3 16:29:02.981092 containerd[1613]: time="2025-11-03T16:29:02.981033970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:02.981903 containerd[1613]: time="2025-11-03T16:29:02.981870303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 3 16:29:02.983158 containerd[1613]: time="2025-11-03T16:29:02.983106315Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:02.984977 containerd[1613]: time="2025-11-03T16:29:02.984939367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:02.985523 containerd[1613]: time="2025-11-03T16:29:02.985482363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.917346045s" Nov 3 16:29:02.985523 containerd[1613]: time="2025-11-03T16:29:02.985521452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 3 16:29:02.986698 containerd[1613]: time="2025-11-03T16:29:02.986575559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 3 16:29:02.999455 containerd[1613]: time="2025-11-03T16:29:02.999395635Z" level=info msg="CreateContainer within sandbox \"6e89352513c4e580897195307d2c9a7b99dccb49e4e8997cf8436dd82a7c883e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 3 16:29:03.007982 containerd[1613]: time="2025-11-03T16:29:03.007941159Z" level=info msg="Container 67045f037dc32331a51551209a6ffe58a1367b2f224effac7284b6b1176f18c5: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:29:03.015410 containerd[1613]: time="2025-11-03T16:29:03.015357817Z" level=info msg="CreateContainer within sandbox \"6e89352513c4e580897195307d2c9a7b99dccb49e4e8997cf8436dd82a7c883e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"67045f037dc32331a51551209a6ffe58a1367b2f224effac7284b6b1176f18c5\"" Nov 3 16:29:03.015902 containerd[1613]: time="2025-11-03T16:29:03.015872689Z" level=info msg="StartContainer for \"67045f037dc32331a51551209a6ffe58a1367b2f224effac7284b6b1176f18c5\"" Nov 3 16:29:03.016937 containerd[1613]: time="2025-11-03T16:29:03.016897260Z" level=info msg="connecting to shim 67045f037dc32331a51551209a6ffe58a1367b2f224effac7284b6b1176f18c5" address="unix:///run/containerd/s/502c8c21104d86539cb75ceb1312ae6a6025c3aa60606e96577de90f2ee28912" protocol=ttrpc version=3 Nov 3 16:29:03.038142 systemd[1]: Started cri-containerd-67045f037dc32331a51551209a6ffe58a1367b2f224effac7284b6b1176f18c5.scope - libcontainer container 67045f037dc32331a51551209a6ffe58a1367b2f224effac7284b6b1176f18c5. 
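As an aside, the pull metrics above are enough for a back-of-envelope transfer rate: 33735893 bytes read over the reported 1.917346045s is roughly 16.8 MiB/s. A small sketch of that arithmetic, with the constants copied from the log lines:

```go
package main

import "fmt"

func main() {
	// Values copied from the typha pull messages above.
	const bytesRead = 33735893  // "bytes read=33735893"
	const seconds = 1.917346045 // "in 1.917346045s"
	mibPerSec := float64(bytesRead) / seconds / (1 << 20)
	fmt.Printf("effective pull rate: %.1f MiB/s\n", mibPerSec) // ~16.8 MiB/s
}
```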
Nov 3 16:29:03.097968 containerd[1613]: time="2025-11-03T16:29:03.097908298Z" level=info msg="StartContainer for \"67045f037dc32331a51551209a6ffe58a1367b2f224effac7284b6b1176f18c5\" returns successfully" Nov 3 16:29:03.489573 kubelet[2802]: E1103 16:29:03.489510 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:03.527609 kubelet[2802]: E1103 16:29:03.527570 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:03.527609 kubelet[2802]: W1103 16:29:03.527591 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:03.527609 kubelet[2802]: E1103 16:29:03.527614 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 3 16:29:03.540168 kubelet[2802]: E1103 16:29:03.540140 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 3 16:29:03.540168 kubelet[2802]: W1103 16:29:03.540163 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 3 16:29:03.540254 kubelet[2802]: E1103 16:29:03.540185 2802 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 3 16:29:04.379487 containerd[1613]: time="2025-11-03T16:29:04.378439700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:04.380035 containerd[1613]: time="2025-11-03T16:29:04.379948466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:04.381498 containerd[1613]: time="2025-11-03T16:29:04.381467902Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:04.383777 containerd[1613]: time="2025-11-03T16:29:04.383735244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:04.384377 containerd[1613]: time="2025-11-03T16:29:04.384327952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.39771639s" Nov 3 16:29:04.384418 containerd[1613]: time="2025-11-03T16:29:04.384376917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 3 16:29:04.398412 containerd[1613]: time="2025-11-03T16:29:04.398375724Z" level=info msg="CreateContainer within sandbox \"1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 3 16:29:04.407800 kubelet[2802]: E1103 16:29:04.407737 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:04.410574 containerd[1613]: time="2025-11-03T16:29:04.410162776Z" level=info msg="Container 9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:29:04.422189 containerd[1613]: time="2025-11-03T16:29:04.422129740Z" level=info msg="CreateContainer within sandbox \"1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39\"" Nov 3 16:29:04.422706 containerd[1613]: time="2025-11-03T16:29:04.422654671Z" level=info msg="StartContainer for \"9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39\"" Nov 3 16:29:04.424073 containerd[1613]: time="2025-11-03T16:29:04.423997500Z" level=info msg="connecting to shim 9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39" address="unix:///run/containerd/s/7bf1c782a680571f86455fb7dcebcb35bceda97cb5081cc86b9bbbf910d75721" protocol=ttrpc version=3 Nov 3 16:29:04.450238 systemd[1]: Started 
cri-containerd-9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39.scope - libcontainer container 9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39. Nov 3 16:29:04.494385 kubelet[2802]: I1103 16:29:04.494313 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 3 16:29:04.495473 kubelet[2802]: E1103 16:29:04.495417 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:04.512924 systemd[1]: cri-containerd-9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39.scope: Deactivated successfully. Nov 3 16:29:04.513307 systemd[1]: cri-containerd-9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39.scope: Consumed 40ms CPU time, 6.5M memory peak, 4.6M written to disk. Nov 3 16:29:04.767755 containerd[1613]: time="2025-11-03T16:29:04.766968363Z" level=info msg="received exit event container_id:\"9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39\" id:\"9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39\" pid:3511 exited_at:{seconds:1762187344 nanos:514690305}" Nov 3 16:29:04.769759 containerd[1613]: time="2025-11-03T16:29:04.769721166Z" level=info msg="StartContainer for \"9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39\" returns successfully" Nov 3 16:29:04.792675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9453bd41e9be4773fd97e3b2856d7f28a39c9d4f39850b11b27d6a09eef09b39-rootfs.mount: Deactivated successfully. Nov 3 16:29:05.499830 kubelet[2802]: E1103 16:29:05.499791 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:05.501026 containerd[1613]: time="2025-11-03T16:29:05.500964878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 3 16:29:05.516346 kubelet[2802]: I1103 16:29:05.515881 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599b9d7bdd-rvj9p" podStartSLOduration=3.597119647 podStartE2EDuration="5.515865975s" podCreationTimestamp="2025-11-03 16:29:00 +0000 UTC" firstStartedPulling="2025-11-03 16:29:01.06757309 +0000 UTC m=+19.811394056" lastFinishedPulling="2025-11-03 16:29:02.986319408 +0000 UTC m=+21.730140384" observedRunningTime="2025-11-03 16:29:03.498029718 +0000 UTC m=+22.241850694" watchObservedRunningTime="2025-11-03 16:29:05.515865975 +0000 UTC m=+24.259686951" Nov 3 16:29:06.408336 kubelet[2802]: E1103 16:29:06.408263 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:08.167737 containerd[1613]: time="2025-11-03T16:29:08.167670243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:08.168550 containerd[1613]: time="2025-11-03T16:29:08.168510415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 3 16:29:08.169611 containerd[1613]: time="2025-11-03T16:29:08.169566745Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:08.171725 containerd[1613]: time="2025-11-03T16:29:08.171684723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:08.172494 containerd[1613]: time="2025-11-03T16:29:08.172457947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.671451397s" Nov 3 16:29:08.172545 containerd[1613]: time="2025-11-03T16:29:08.172507534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 3 16:29:08.176387 containerd[1613]: time="2025-11-03T16:29:08.176352757Z" level=info msg="CreateContainer within sandbox \"1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 3 16:29:08.185779 containerd[1613]: time="2025-11-03T16:29:08.185744893Z" level=info msg="Container 04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:29:08.195159 containerd[1613]: time="2025-11-03T16:29:08.195104231Z" level=info msg="CreateContainer within sandbox \"1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49\"" Nov 3 16:29:08.195917 containerd[1613]: time="2025-11-03T16:29:08.195841383Z" level=info msg="StartContainer for \"04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49\"" Nov 3 16:29:08.197804 containerd[1613]: time="2025-11-03T16:29:08.197763578Z" level=info msg="connecting to shim 04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49" address="unix:///run/containerd/s/7bf1c782a680571f86455fb7dcebcb35bceda97cb5081cc86b9bbbf910d75721" protocol=ttrpc version=3 Nov 3 16:29:08.231317 systemd[1]: Started cri-containerd-04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49.scope - libcontainer container 04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49. 
Nov 3 16:29:08.280760 containerd[1613]: time="2025-11-03T16:29:08.280711134Z" level=info msg="StartContainer for \"04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49\" returns successfully" Nov 3 16:29:08.408089 kubelet[2802]: E1103 16:29:08.407995 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:08.508455 kubelet[2802]: E1103 16:29:08.508310 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:09.510094 kubelet[2802]: E1103 16:29:09.510040 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:10.408417 kubelet[2802]: E1103 16:29:10.408341 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:10.515895 systemd[1]: cri-containerd-04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49.scope: Deactivated successfully. Nov 3 16:29:10.516574 systemd[1]: cri-containerd-04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49.scope: Consumed 665ms CPU time, 178.4M memory peak, 2.6M read from disk, 171.3M written to disk. Nov 3 16:29:10.517626 containerd[1613]: time="2025-11-03T16:29:10.517464464Z" level=info msg="received exit event container_id:\"04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49\" id:\"04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49\" pid:3573 exited_at:{seconds:1762187350 nanos:517232987}" Nov 3 16:29:10.542719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04e984941efbe07963ec510dd77d79d5e78a125d037598e7e762140046414c49-rootfs.mount: Deactivated successfully. Nov 3 16:29:10.563331 kubelet[2802]: I1103 16:29:10.563296 2802 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 3 16:29:10.666215 systemd[1]: Created slice kubepods-burstable-pod5ff9fe09_a21c_4f7c_94fd_bce37b5b7acf.slice - libcontainer container kubepods-burstable-pod5ff9fe09_a21c_4f7c_94fd_bce37b5b7acf.slice. 
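The Created slice messages above and below show how kubelet maps pods into systemd cgroup slices: the pod UID with dashes replaced by underscores, prefixed by the QoS-class slice (kubepods-burstable or kubepods-besteffort). A tiny sketch reproducing the naming visible in the log; the helper name is made up, and kubelet's real cgroup code is considerably more involved:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the pattern seen in the "Created slice" log lines:
// kubepods-<qos>-pod<uid-with-underscores>.slice
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the coredns pod in the log above.
	fmt.Println(sliceName("burstable", "5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf"))
	// Output: kubepods-burstable-pod5ff9fe09_a21c_4f7c_94fd_bce37b5b7acf.slice
}
```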
Nov 3 16:29:10.681055 kubelet[2802]: I1103 16:29:10.680972 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/354740f5-5e36-44c6-9400-a45c8540aea2-whisker-ca-bundle\") pod \"whisker-5d9f759df-27n9g\" (UID: \"354740f5-5e36-44c6-9400-a45c8540aea2\") " pod="calico-system/whisker-5d9f759df-27n9g" Nov 3 16:29:10.681055 kubelet[2802]: I1103 16:29:10.681046 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62vdh\" (UniqueName: \"kubernetes.io/projected/9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984-kube-api-access-62vdh\") pod \"calico-apiserver-6bbd84b756-vf947\" (UID: \"9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984\") " pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" Nov 3 16:29:10.681237 kubelet[2802]: I1103 16:29:10.681073 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4p9j\" (UniqueName: \"kubernetes.io/projected/dffa515e-d491-4502-8fb4-90d289e9e24a-kube-api-access-d4p9j\") pod \"calico-kube-controllers-65577d7bd7-xn8xr\" (UID: \"dffa515e-d491-4502-8fb4-90d289e9e24a\") " pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" Nov 3 16:29:10.681237 kubelet[2802]: I1103 16:29:10.681095 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh2qp\" (UniqueName: \"kubernetes.io/projected/cf32291d-629d-4182-829b-587a319625b7-kube-api-access-xh2qp\") pod \"goldmane-7c778bb748-kw74x\" (UID: \"cf32291d-629d-4182-829b-587a319625b7\") " pod="calico-system/goldmane-7c778bb748-kw74x" Nov 3 16:29:10.681237 kubelet[2802]: I1103 16:29:10.681118 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48j82\" (UniqueName: \"kubernetes.io/projected/5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf-kube-api-access-48j82\") pod \"coredns-66bc5c9577-66brq\" (UID: \"5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf\") " pod="kube-system/coredns-66bc5c9577-66brq" Nov 3 16:29:10.681237 kubelet[2802]: I1103 16:29:10.681145 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/354740f5-5e36-44c6-9400-a45c8540aea2-whisker-backend-key-pair\") pod \"whisker-5d9f759df-27n9g\" (UID: \"354740f5-5e36-44c6-9400-a45c8540aea2\") " pod="calico-system/whisker-5d9f759df-27n9g" Nov 3 16:29:10.681237 kubelet[2802]: I1103 16:29:10.681167 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf32291d-629d-4182-829b-587a319625b7-config\") pod \"goldmane-7c778bb748-kw74x\" (UID: \"cf32291d-629d-4182-829b-587a319625b7\") " pod="calico-system/goldmane-7c778bb748-kw74x" Nov 3 16:29:10.681363 kubelet[2802]: I1103 16:29:10.681186 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvxs\" (UniqueName: \"kubernetes.io/projected/84a57abf-59e3-4ca8-817d-7787f6e42d37-kube-api-access-jpvxs\") pod \"calico-apiserver-6bbd84b756-z5kfx\" (UID: \"84a57abf-59e3-4ca8-817d-7787f6e42d37\") " pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" Nov 3 16:29:10.681363 kubelet[2802]: I1103 16:29:10.681207 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984-calico-apiserver-certs\") pod \"calico-apiserver-6bbd84b756-vf947\" (UID: \"9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984\") " pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" Nov 3 16:29:10.681363 kubelet[2802]: I1103 16:29:10.681224 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf-config-volume\") pod \"coredns-66bc5c9577-66brq\" (UID: \"5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf\") " pod="kube-system/coredns-66bc5c9577-66brq" Nov 3 16:29:10.681363 kubelet[2802]: I1103 16:29:10.681243 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwb86\" (UniqueName: \"kubernetes.io/projected/974e3016-4f92-40ef-b564-73c74925d5f3-kube-api-access-nwb86\") pod \"calico-apiserver-848d75bc5c-dq964\" (UID: \"974e3016-4f92-40ef-b564-73c74925d5f3\") " pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" Nov 3 16:29:10.681363 kubelet[2802]: I1103 16:29:10.681263 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa515e-d491-4502-8fb4-90d289e9e24a-tigera-ca-bundle\") pod \"calico-kube-controllers-65577d7bd7-xn8xr\" (UID: \"dffa515e-d491-4502-8fb4-90d289e9e24a\") " pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" Nov 3 16:29:10.681481 kubelet[2802]: I1103 16:29:10.681280 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb9d669e-7b5f-4659-94d4-c84247454d71-config-volume\") pod \"coredns-66bc5c9577-gglmp\" (UID: \"cb9d669e-7b5f-4659-94d4-c84247454d71\") " pod="kube-system/coredns-66bc5c9577-gglmp" Nov 3 16:29:10.681481 kubelet[2802]: I1103 16:29:10.681294 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf32291d-629d-4182-829b-587a319625b7-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-kw74x\" (UID: \"cf32291d-629d-4182-829b-587a319625b7\") " pod="calico-system/goldmane-7c778bb748-kw74x" Nov 3 16:29:10.681481 kubelet[2802]: I1103 16:29:10.681312 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt96b\" (UniqueName: \"kubernetes.io/projected/354740f5-5e36-44c6-9400-a45c8540aea2-kube-api-access-zt96b\") pod \"whisker-5d9f759df-27n9g\" (UID: \"354740f5-5e36-44c6-9400-a45c8540aea2\") " pod="calico-system/whisker-5d9f759df-27n9g" Nov 3 16:29:10.681481 kubelet[2802]: I1103 16:29:10.681335 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvch\" (UniqueName: \"kubernetes.io/projected/cb9d669e-7b5f-4659-94d4-c84247454d71-kube-api-access-wjvch\") pod \"coredns-66bc5c9577-gglmp\" (UID: \"cb9d669e-7b5f-4659-94d4-c84247454d71\") " pod="kube-system/coredns-66bc5c9577-gglmp" Nov 3 16:29:10.681481 kubelet[2802]: I1103 16:29:10.681357 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84a57abf-59e3-4ca8-817d-7787f6e42d37-calico-apiserver-certs\") pod \"calico-apiserver-6bbd84b756-z5kfx\" (UID: \"84a57abf-59e3-4ca8-817d-7787f6e42d37\") " 
pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" Nov 3 16:29:10.681599 kubelet[2802]: I1103 16:29:10.681476 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/974e3016-4f92-40ef-b564-73c74925d5f3-calico-apiserver-certs\") pod \"calico-apiserver-848d75bc5c-dq964\" (UID: \"974e3016-4f92-40ef-b564-73c74925d5f3\") " pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" Nov 3 16:29:10.681599 kubelet[2802]: I1103 16:29:10.681505 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cf32291d-629d-4182-829b-587a319625b7-goldmane-key-pair\") pod \"goldmane-7c778bb748-kw74x\" (UID: \"cf32291d-629d-4182-829b-587a319625b7\") " pod="calico-system/goldmane-7c778bb748-kw74x" Nov 3 16:29:10.683677 systemd[1]: Created slice kubepods-besteffort-pod9b5d0d2f_8a47_4e07_931b_0ddd4bf1a984.slice - libcontainer container kubepods-besteffort-pod9b5d0d2f_8a47_4e07_931b_0ddd4bf1a984.slice. Nov 3 16:29:10.690913 systemd[1]: Created slice kubepods-besteffort-poddffa515e_d491_4502_8fb4_90d289e9e24a.slice - libcontainer container kubepods-besteffort-poddffa515e_d491_4502_8fb4_90d289e9e24a.slice. Nov 3 16:29:10.696775 systemd[1]: Created slice kubepods-besteffort-pod84a57abf_59e3_4ca8_817d_7787f6e42d37.slice - libcontainer container kubepods-besteffort-pod84a57abf_59e3_4ca8_817d_7787f6e42d37.slice. Nov 3 16:29:10.703781 systemd[1]: Created slice kubepods-besteffort-pod974e3016_4f92_40ef_b564_73c74925d5f3.slice - libcontainer container kubepods-besteffort-pod974e3016_4f92_40ef_b564_73c74925d5f3.slice. Nov 3 16:29:10.712686 systemd[1]: Created slice kubepods-besteffort-podcf32291d_629d_4182_829b_587a319625b7.slice - libcontainer container kubepods-besteffort-podcf32291d_629d_4182_829b_587a319625b7.slice. Nov 3 16:29:10.717454 systemd[1]: Created slice kubepods-besteffort-pod354740f5_5e36_44c6_9400_a45c8540aea2.slice - libcontainer container kubepods-besteffort-pod354740f5_5e36_44c6_9400_a45c8540aea2.slice. Nov 3 16:29:10.725305 systemd[1]: Created slice kubepods-burstable-podcb9d669e_7b5f_4659_94d4_c84247454d71.slice - libcontainer container kubepods-burstable-podcb9d669e_7b5f_4659_94d4_c84247454d71.slice. 
Nov 3 16:29:10.980925 kubelet[2802]: E1103 16:29:10.980784 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:10.981787 containerd[1613]: time="2025-11-03T16:29:10.981666677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-66brq,Uid:5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf,Namespace:kube-system,Attempt:0,}" Nov 3 16:29:10.990781 containerd[1613]: time="2025-11-03T16:29:10.990722674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd84b756-vf947,Uid:9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984,Namespace:calico-apiserver,Attempt:0,}" Nov 3 16:29:10.996249 containerd[1613]: time="2025-11-03T16:29:10.996190519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65577d7bd7-xn8xr,Uid:dffa515e-d491-4502-8fb4-90d289e9e24a,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:11.003032 containerd[1613]: time="2025-11-03T16:29:11.002937853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd84b756-z5kfx,Uid:84a57abf-59e3-4ca8-817d-7787f6e42d37,Namespace:calico-apiserver,Attempt:0,}" Nov 3 16:29:11.010947 containerd[1613]: time="2025-11-03T16:29:11.010848093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848d75bc5c-dq964,Uid:974e3016-4f92-40ef-b564-73c74925d5f3,Namespace:calico-apiserver,Attempt:0,}" Nov 3 16:29:11.021679 containerd[1613]: time="2025-11-03T16:29:11.021465625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kw74x,Uid:cf32291d-629d-4182-829b-587a319625b7,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:11.023696 containerd[1613]: time="2025-11-03T16:29:11.023663738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d9f759df-27n9g,Uid:354740f5-5e36-44c6-9400-a45c8540aea2,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:11.032350 kubelet[2802]: E1103 16:29:11.032308 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:11.034197 containerd[1613]: time="2025-11-03T16:29:11.033041566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gglmp,Uid:cb9d669e-7b5f-4659-94d4-c84247454d71,Namespace:kube-system,Attempt:0,}" Nov 3 16:29:11.139962 containerd[1613]: time="2025-11-03T16:29:11.139901627Z" level=error msg="Failed to destroy network for sandbox \"6db50756f71ba33985b2306693c21902eec751758730d57d6acfc2f1dac11ecd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.161419 containerd[1613]: time="2025-11-03T16:29:11.161349153Z" level=error msg="Failed to destroy network for sandbox \"4eba266827eb1c49a1e019c0c78264f7115feb5ecfa1a9e5057ed339a12f9dbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.162877 containerd[1613]: time="2025-11-03T16:29:11.162754030Z" level=error msg="Failed to destroy network for sandbox \"01156f9e4911ca60afe121f293dcbb77d9b35c6ad9cdea586363506653cf3cdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.171039 containerd[1613]: time="2025-11-03T16:29:11.170968656Z" level=error msg="Failed to destroy network for sandbox \"c0fcfe5313e8ffc3dbdeb5ddc7d9071909a23bfb2dbed73b312eedee89829512\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.175523 containerd[1613]: time="2025-11-03T16:29:11.175456666Z" level=error msg="Failed to destroy network for sandbox \"b91225186cba14fb71de060a62d929692977baa12f057df48d630a9504fe309c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.181248 containerd[1613]: time="2025-11-03T16:29:11.181189478Z" level=error msg="Failed to destroy network for sandbox \"8c8558d75beea840db59593e35ed2dc4531cfc222990c3058d002dd67621c548\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.182226 containerd[1613]: time="2025-11-03T16:29:11.182164527Z" level=error msg="Failed to destroy network for sandbox \"4c8e1ffd0475b0a44e150e8927472c4e6d45e558c7f2f9fa839a4dda4f7f4605\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.184471 containerd[1613]: time="2025-11-03T16:29:11.184432153Z" level=error msg="Failed to destroy network for sandbox \"cc212748584c9e1d7ed40d69babf14088791f48a8f5c8714ac993c8aeea4b533\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.440852 containerd[1613]: time="2025-11-03T16:29:11.425118238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65577d7bd7-xn8xr,Uid:dffa515e-d491-4502-8fb4-90d289e9e24a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0fcfe5313e8ffc3dbdeb5ddc7d9071909a23bfb2dbed73b312eedee89829512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441084 containerd[1613]: time="2025-11-03T16:29:11.430169180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd84b756-z5kfx,Uid:84a57abf-59e3-4ca8-817d-7787f6e42d37,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db50756f71ba33985b2306693c21902eec751758730d57d6acfc2f1dac11ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441210 kubelet[2802]: E1103 16:29:11.441160 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db50756f71ba33985b2306693c21902eec751758730d57d6acfc2f1dac11ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441283 kubelet[2802]: E1103 16:29:11.441233 2802 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db50756f71ba33985b2306693c21902eec751758730d57d6acfc2f1dac11ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" Nov 3 16:29:11.441283 kubelet[2802]: E1103 16:29:11.441221 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0fcfe5313e8ffc3dbdeb5ddc7d9071909a23bfb2dbed73b312eedee89829512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441349 kubelet[2802]: E1103 16:29:11.441320 2802 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0fcfe5313e8ffc3dbdeb5ddc7d9071909a23bfb2dbed73b312eedee89829512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" Nov 3 16:29:11.441374 containerd[1613]: time="2025-11-03T16:29:11.431080606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-66brq,Uid:5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eba266827eb1c49a1e019c0c78264f7115feb5ecfa1a9e5057ed339a12f9dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441421 containerd[1613]: time="2025-11-03T16:29:11.431958954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d9f759df-27n9g,Uid:354740f5-5e36-44c6-9400-a45c8540aea2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01156f9e4911ca60afe121f293dcbb77d9b35c6ad9cdea586363506653cf3cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441462 kubelet[2802]: E1103 16:29:11.441348 2802 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0fcfe5313e8ffc3dbdeb5ddc7d9071909a23bfb2dbed73b312eedee89829512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" Nov 3 16:29:11.441487 containerd[1613]: time="2025-11-03T16:29:11.432767619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gglmp,Uid:cb9d669e-7b5f-4659-94d4-c84247454d71,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b91225186cba14fb71de060a62d929692977baa12f057df48d630a9504fe309c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441487 containerd[1613]: time="2025-11-03T16:29:11.433652990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd84b756-vf947,Uid:9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c8558d75beea840db59593e35ed2dc4531cfc222990c3058d002dd67621c548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441555 kubelet[2802]: E1103 16:29:11.441448 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65577d7bd7-xn8xr_calico-system(dffa515e-d491-4502-8fb4-90d289e9e24a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65577d7bd7-xn8xr_calico-system(dffa515e-d491-4502-8fb4-90d289e9e24a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0fcfe5313e8ffc3dbdeb5ddc7d9071909a23bfb2dbed73b312eedee89829512\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" podUID="dffa515e-d491-4502-8fb4-90d289e9e24a" Nov 3 16:29:11.441555 kubelet[2802]: E1103 16:29:11.441473 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eba266827eb1c49a1e019c0c78264f7115feb5ecfa1a9e5057ed339a12f9dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441555 kubelet[2802]: E1103 16:29:11.441528 2802 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eba266827eb1c49a1e019c0c78264f7115feb5ecfa1a9e5057ed339a12f9dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-66brq" Nov 3 16:29:11.441646 containerd[1613]: time="2025-11-03T16:29:11.434467575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kw74x,Uid:cf32291d-629d-4182-829b-587a319625b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8e1ffd0475b0a44e150e8927472c4e6d45e558c7f2f9fa839a4dda4f7f4605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441646 containerd[1613]: time="2025-11-03T16:29:11.435284113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848d75bc5c-dq964,Uid:974e3016-4f92-40ef-b564-73c74925d5f3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cc212748584c9e1d7ed40d69babf14088791f48a8f5c8714ac993c8aeea4b533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441711 kubelet[2802]: E1103 16:29:11.441542 2802 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eba266827eb1c49a1e019c0c78264f7115feb5ecfa1a9e5057ed339a12f9dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-66brq" Nov 3 16:29:11.441711 kubelet[2802]: E1103 16:29:11.441254 2802 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db50756f71ba33985b2306693c21902eec751758730d57d6acfc2f1dac11ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" Nov 3 16:29:11.441711 kubelet[2802]: E1103 16:29:11.441647 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-66brq_kube-system(5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-66brq_kube-system(5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4eba266827eb1c49a1e019c0c78264f7115feb5ecfa1a9e5057ed339a12f9dbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-66brq" podUID="5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf" Nov 3 16:29:11.441814 kubelet[2802]: E1103 16:29:11.441675 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bbd84b756-z5kfx_calico-apiserver(84a57abf-59e3-4ca8-817d-7787f6e42d37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bbd84b756-z5kfx_calico-apiserver(84a57abf-59e3-4ca8-817d-7787f6e42d37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6db50756f71ba33985b2306693c21902eec751758730d57d6acfc2f1dac11ecd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" podUID="84a57abf-59e3-4ca8-817d-7787f6e42d37" Nov 3 16:29:11.441814 kubelet[2802]: E1103 16:29:11.441768 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc212748584c9e1d7ed40d69babf14088791f48a8f5c8714ac993c8aeea4b533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441814 kubelet[2802]: E1103 16:29:11.441790 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8c8558d75beea840db59593e35ed2dc4531cfc222990c3058d002dd67621c548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441919 kubelet[2802]: E1103 16:29:11.441796 2802 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc212748584c9e1d7ed40d69babf14088791f48a8f5c8714ac993c8aeea4b533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" Nov 3 16:29:11.441919 kubelet[2802]: E1103 16:29:11.441811 2802 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc212748584c9e1d7ed40d69babf14088791f48a8f5c8714ac993c8aeea4b533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" Nov 3 16:29:11.441919 kubelet[2802]: E1103 16:29:11.441827 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91225186cba14fb71de060a62d929692977baa12f057df48d630a9504fe309c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.441989 kubelet[2802]: E1103 16:29:11.441843 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-848d75bc5c-dq964_calico-apiserver(974e3016-4f92-40ef-b564-73c74925d5f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-848d75bc5c-dq964_calico-apiserver(974e3016-4f92-40ef-b564-73c74925d5f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc212748584c9e1d7ed40d69babf14088791f48a8f5c8714ac993c8aeea4b533\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" podUID="974e3016-4f92-40ef-b564-73c74925d5f3" Nov 3 16:29:11.441989 kubelet[2802]: E1103 16:29:11.441866 2802 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91225186cba14fb71de060a62d929692977baa12f057df48d630a9504fe309c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gglmp" Nov 3 16:29:11.441989 kubelet[2802]: E1103 16:29:11.441881 2802 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91225186cba14fb71de060a62d929692977baa12f057df48d630a9504fe309c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gglmp" Nov 3 16:29:11.442116 kubelet[2802]: 
E1103 16:29:11.441884 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8e1ffd0475b0a44e150e8927472c4e6d45e558c7f2f9fa839a4dda4f7f4605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.442116 kubelet[2802]: E1103 16:29:11.441902 2802 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8e1ffd0475b0a44e150e8927472c4e6d45e558c7f2f9fa839a4dda4f7f4605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-kw74x" Nov 3 16:29:11.442116 kubelet[2802]: E1103 16:29:11.441913 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gglmp_kube-system(cb9d669e-7b5f-4659-94d4-c84247454d71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gglmp_kube-system(cb9d669e-7b5f-4659-94d4-c84247454d71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b91225186cba14fb71de060a62d929692977baa12f057df48d630a9504fe309c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gglmp" podUID="cb9d669e-7b5f-4659-94d4-c84247454d71" Nov 3 16:29:11.442203 kubelet[2802]: E1103 16:29:11.441918 2802 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8e1ffd0475b0a44e150e8927472c4e6d45e558c7f2f9fa839a4dda4f7f4605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-kw74x" Nov 3 16:29:11.442203 kubelet[2802]: E1103 16:29:11.441771 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01156f9e4911ca60afe121f293dcbb77d9b35c6ad9cdea586363506653cf3cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:11.442203 kubelet[2802]: E1103 16:29:11.441946 2802 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01156f9e4911ca60afe121f293dcbb77d9b35c6ad9cdea586363506653cf3cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d9f759df-27n9g" Nov 3 16:29:11.442274 kubelet[2802]: E1103 16:29:11.441947 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-kw74x_calico-system(cf32291d-629d-4182-829b-587a319625b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-kw74x_calico-system(cf32291d-629d-4182-829b-587a319625b7)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"4c8e1ffd0475b0a44e150e8927472c4e6d45e558c7f2f9fa839a4dda4f7f4605\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-kw74x" podUID="cf32291d-629d-4182-829b-587a319625b7" Nov 3 16:29:11.442274 kubelet[2802]: E1103 16:29:11.441957 2802 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01156f9e4911ca60afe121f293dcbb77d9b35c6ad9cdea586363506653cf3cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d9f759df-27n9g" Nov 3 16:29:11.442274 kubelet[2802]: E1103 16:29:11.441811 2802 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c8558d75beea840db59593e35ed2dc4531cfc222990c3058d002dd67621c548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" Nov 3 16:29:11.442420 kubelet[2802]: E1103 16:29:11.441975 2802 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c8558d75beea840db59593e35ed2dc4531cfc222990c3058d002dd67621c548\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" Nov 3 16:29:11.442420 kubelet[2802]: E1103 16:29:11.441983 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d9f759df-27n9g_calico-system(354740f5-5e36-44c6-9400-a45c8540aea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d9f759df-27n9g_calico-system(354740f5-5e36-44c6-9400-a45c8540aea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01156f9e4911ca60afe121f293dcbb77d9b35c6ad9cdea586363506653cf3cdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d9f759df-27n9g" podUID="354740f5-5e36-44c6-9400-a45c8540aea2" Nov 3 16:29:11.442420 kubelet[2802]: E1103 16:29:11.442018 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bbd84b756-vf947_calico-apiserver(9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bbd84b756-vf947_calico-apiserver(9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c8558d75beea840db59593e35ed2dc4531cfc222990c3058d002dd67621c548\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" podUID="9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984" Nov 3 16:29:11.521393 kubelet[2802]: E1103 
16:29:11.521351 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:11.522135 containerd[1613]: time="2025-11-03T16:29:11.521995306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 3 16:29:12.414389 systemd[1]: Created slice kubepods-besteffort-pod6648243c_1869_4d41_a84f_1ec8db284c55.slice - libcontainer container kubepods-besteffort-pod6648243c_1869_4d41_a84f_1ec8db284c55.slice. Nov 3 16:29:12.419090 containerd[1613]: time="2025-11-03T16:29:12.419049330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnj89,Uid:6648243c-1869-4d41-a84f-1ec8db284c55,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:12.474479 containerd[1613]: time="2025-11-03T16:29:12.474384683Z" level=error msg="Failed to destroy network for sandbox \"1663d84ddba8835f75ea6c8cf03483989fbf1ded18353a560bb3c7c84e60316f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:12.476860 systemd[1]: run-netns-cni\x2d9ed79875\x2df93c\x2da5a8\x2d6e8a\x2d48aabf480358.mount: Deactivated successfully. Nov 3 16:29:12.477081 containerd[1613]: time="2025-11-03T16:29:12.477029954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnj89,Uid:6648243c-1869-4d41-a84f-1ec8db284c55,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1663d84ddba8835f75ea6c8cf03483989fbf1ded18353a560bb3c7c84e60316f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:12.477358 kubelet[2802]: E1103 16:29:12.477311 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1663d84ddba8835f75ea6c8cf03483989fbf1ded18353a560bb3c7c84e60316f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 3 16:29:12.477684 kubelet[2802]: E1103 16:29:12.477384 2802 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1663d84ddba8835f75ea6c8cf03483989fbf1ded18353a560bb3c7c84e60316f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lnj89" Nov 3 16:29:12.477684 kubelet[2802]: E1103 16:29:12.477412 2802 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1663d84ddba8835f75ea6c8cf03483989fbf1ded18353a560bb3c7c84e60316f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lnj89" Nov 3 16:29:12.477684 kubelet[2802]: E1103 16:29:12.477486 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lnj89_calico-system(6648243c-1869-4d41-a84f-1ec8db284c55)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lnj89_calico-system(6648243c-1869-4d41-a84f-1ec8db284c55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1663d84ddba8835f75ea6c8cf03483989fbf1ded18353a560bb3c7c84e60316f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:19.509964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160418441.mount: Deactivated successfully. Nov 3 16:29:20.471957 containerd[1613]: time="2025-11-03T16:29:20.471866297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:20.513625 containerd[1613]: time="2025-11-03T16:29:20.495737994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 3 16:29:20.513625 containerd[1613]: time="2025-11-03T16:29:20.499177733Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:20.513838 containerd[1613]: time="2025-11-03T16:29:20.503989046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.98184598s" Nov 3 16:29:20.513838 containerd[1613]: time="2025-11-03T16:29:20.513772473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 3 16:29:20.514587 containerd[1613]: time="2025-11-03T16:29:20.514562116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 3 16:29:20.541195 containerd[1613]: time="2025-11-03T16:29:20.541117709Z" level=info msg="CreateContainer within sandbox \"1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 3 16:29:20.561110 containerd[1613]: time="2025-11-03T16:29:20.561062297Z" level=info msg="Container 7d55ce7514207da9c4d9db0057c9576df8b92c2d0fb6365fed8d87daee6f9041: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:29:20.562566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061891149.mount: Deactivated successfully. 
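[Annotation] Every sandbox failure in the burst above traces to one missing file: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, which the calico/node container writes once it is running and has mounted /var/lib/calico/ — exactly the check the error text recommends. The timeline in this log bears that out: the node image is still being pulled at 16:29:11, the container only starts at 16:29:20, and the sandbox retries at 16:29:22 below then succeed. A minimal Go sketch of that lookup, grounded only in the error string itself; the file path and stat-then-read behavior come from the log, while the function name and structure are illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// nodename stats and reads the file that calico/node is expected to write.
func nodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// err already reads "stat /var/lib/calico/nodename: no such file or
		// directory"; appending the hint reproduces the log message verbatim.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // mirrors the repeated CNI error above
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}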
Nov 3 16:29:20.592911 containerd[1613]: time="2025-11-03T16:29:20.592840610Z" level=info msg="CreateContainer within sandbox \"1c626ddbbdb5bd8d54f0651ce1c2fd38dd9037372552268f9c22cdc46123f076\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7d55ce7514207da9c4d9db0057c9576df8b92c2d0fb6365fed8d87daee6f9041\"" Nov 3 16:29:20.593622 containerd[1613]: time="2025-11-03T16:29:20.593574995Z" level=info msg="StartContainer for \"7d55ce7514207da9c4d9db0057c9576df8b92c2d0fb6365fed8d87daee6f9041\"" Nov 3 16:29:20.595614 containerd[1613]: time="2025-11-03T16:29:20.595579804Z" level=info msg="connecting to shim 7d55ce7514207da9c4d9db0057c9576df8b92c2d0fb6365fed8d87daee6f9041" address="unix:///run/containerd/s/7bf1c782a680571f86455fb7dcebcb35bceda97cb5081cc86b9bbbf910d75721" protocol=ttrpc version=3 Nov 3 16:29:20.620153 systemd[1]: Started cri-containerd-7d55ce7514207da9c4d9db0057c9576df8b92c2d0fb6365fed8d87daee6f9041.scope - libcontainer container 7d55ce7514207da9c4d9db0057c9576df8b92c2d0fb6365fed8d87daee6f9041. Nov 3 16:29:20.668411 containerd[1613]: time="2025-11-03T16:29:20.668369511Z" level=info msg="StartContainer for \"7d55ce7514207da9c4d9db0057c9576df8b92c2d0fb6365fed8d87daee6f9041\" returns successfully" Nov 3 16:29:20.751980 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 3 16:29:20.752955 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 3 16:29:20.951061 kubelet[2802]: I1103 16:29:20.950960 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt96b\" (UniqueName: \"kubernetes.io/projected/354740f5-5e36-44c6-9400-a45c8540aea2-kube-api-access-zt96b\") pod \"354740f5-5e36-44c6-9400-a45c8540aea2\" (UID: \"354740f5-5e36-44c6-9400-a45c8540aea2\") " Nov 3 16:29:20.951061 kubelet[2802]: I1103 16:29:20.951069 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/354740f5-5e36-44c6-9400-a45c8540aea2-whisker-ca-bundle\") pod \"354740f5-5e36-44c6-9400-a45c8540aea2\" (UID: \"354740f5-5e36-44c6-9400-a45c8540aea2\") " Nov 3 16:29:20.951599 kubelet[2802]: I1103 16:29:20.951093 2802 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/354740f5-5e36-44c6-9400-a45c8540aea2-whisker-backend-key-pair\") pod \"354740f5-5e36-44c6-9400-a45c8540aea2\" (UID: \"354740f5-5e36-44c6-9400-a45c8540aea2\") " Nov 3 16:29:20.952210 kubelet[2802]: I1103 16:29:20.952180 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354740f5-5e36-44c6-9400-a45c8540aea2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "354740f5-5e36-44c6-9400-a45c8540aea2" (UID: "354740f5-5e36-44c6-9400-a45c8540aea2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 3 16:29:20.956630 kubelet[2802]: I1103 16:29:20.956588 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354740f5-5e36-44c6-9400-a45c8540aea2-kube-api-access-zt96b" (OuterVolumeSpecName: "kube-api-access-zt96b") pod "354740f5-5e36-44c6-9400-a45c8540aea2" (UID: "354740f5-5e36-44c6-9400-a45c8540aea2"). InnerVolumeSpecName "kube-api-access-zt96b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 3 16:29:20.956689 kubelet[2802]: I1103 16:29:20.956647 2802 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354740f5-5e36-44c6-9400-a45c8540aea2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "354740f5-5e36-44c6-9400-a45c8540aea2" (UID: "354740f5-5e36-44c6-9400-a45c8540aea2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 3 16:29:21.002678 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:47916.service - OpenSSH per-connection server daemon (10.0.0.1:47916). Nov 3 16:29:21.052088 kubelet[2802]: I1103 16:29:21.052052 2802 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/354740f5-5e36-44c6-9400-a45c8540aea2-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 3 16:29:21.052286 kubelet[2802]: I1103 16:29:21.052238 2802 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zt96b\" (UniqueName: \"kubernetes.io/projected/354740f5-5e36-44c6-9400-a45c8540aea2-kube-api-access-zt96b\") on node \"localhost\" DevicePath \"\"" Nov 3 16:29:21.052286 kubelet[2802]: I1103 16:29:21.052256 2802 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/354740f5-5e36-44c6-9400-a45c8540aea2-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 3 16:29:21.075732 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 47916 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:21.077658 sshd-session[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:21.082496 systemd-logind[1582]: New session 10 of user core. Nov 3 16:29:21.092156 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 3 16:29:21.181113 sshd[3981]: Connection closed by 10.0.0.1 port 47916 Nov 3 16:29:21.181417 sshd-session[3966]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:21.186830 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:47916.service: Deactivated successfully. Nov 3 16:29:21.189030 systemd[1]: session-10.scope: Deactivated successfully. Nov 3 16:29:21.189893 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit. Nov 3 16:29:21.191495 systemd-logind[1582]: Removed session 10. Nov 3 16:29:21.416523 systemd[1]: Removed slice kubepods-besteffort-pod354740f5_5e36_44c6_9400_a45c8540aea2.slice - libcontainer container kubepods-besteffort-pod354740f5_5e36_44c6_9400_a45c8540aea2.slice. Nov 3 16:29:21.522753 systemd[1]: var-lib-kubelet-pods-354740f5\x2d5e36\x2d44c6\x2d9400\x2da45c8540aea2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzt96b.mount: Deactivated successfully. Nov 3 16:29:21.522895 systemd[1]: var-lib-kubelet-pods-354740f5\x2d5e36\x2d44c6\x2d9400\x2da45c8540aea2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
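[Annotation] The mount-unit names just deactivated (var-lib-kubelet-pods-354740f5\x2d5e36\x2d... and the kubernetes.io\x7eprojected one) show systemd's path escaping at work: '/' separators collapse to '-', and any byte outside [a-zA-Z0-9:_.] — including the literal '-' in pod UIDs and the '~' in volume plugin names — becomes a \xNN hex escape. The sketch below is an illustrative Go version of that rule, roughly what `systemd-escape --path` computes, not systemd's own code, and it omits edge cases such as empty paths:

package main

import (
	"fmt"
	"strings"
)

// escapeSystemdPath approximates systemd's path-to-unit-name escaping.
func escapeSystemdPath(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' ||
			c >= '0' && c <= '9' || c == ':' || c == '_' ||
			(c == '.' && i > 0): // a leading '.' must be escaped
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // '-' -> \x2d, '~' -> \x7e, ...
		}
	}
	return b.String()
}

func main() {
	// Reproduces the kube-api-access mount unit name seen in this log.
	fmt.Println(escapeSystemdPath(
		"/var/lib/kubelet/pods/354740f5-5e36-44c6-9400-a45c8540aea2/volumes/kubernetes.io~projected/kube-api-access-zt96b") + ".mount")
}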
Nov 3 16:29:21.545037 kubelet[2802]: E1103 16:29:21.544668 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:21.880602 kubelet[2802]: I1103 16:29:21.879898 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fxkrp" podStartSLOduration=2.431551689 podStartE2EDuration="21.879878902s" podCreationTimestamp="2025-11-03 16:29:00 +0000 UTC" firstStartedPulling="2025-11-03 16:29:01.067720353 +0000 UTC m=+19.811541319" lastFinishedPulling="2025-11-03 16:29:20.516047556 +0000 UTC m=+39.259868532" observedRunningTime="2025-11-03 16:29:21.730108237 +0000 UTC m=+40.473929213" watchObservedRunningTime="2025-11-03 16:29:21.879878902 +0000 UTC m=+40.623699868" Nov 3 16:29:21.931045 systemd[1]: Created slice kubepods-besteffort-pod78ce1ad2_3ddf_4f0f_8b04_471a10465b0c.slice - libcontainer container kubepods-besteffort-pod78ce1ad2_3ddf_4f0f_8b04_471a10465b0c.slice. Nov 3 16:29:21.959851 kubelet[2802]: I1103 16:29:21.959783 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78ce1ad2-3ddf-4f0f-8b04-471a10465b0c-whisker-backend-key-pair\") pod \"whisker-84fddd5b49-7hzd4\" (UID: \"78ce1ad2-3ddf-4f0f-8b04-471a10465b0c\") " pod="calico-system/whisker-84fddd5b49-7hzd4" Nov 3 16:29:21.959851 kubelet[2802]: I1103 16:29:21.959836 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ce1ad2-3ddf-4f0f-8b04-471a10465b0c-whisker-ca-bundle\") pod \"whisker-84fddd5b49-7hzd4\" (UID: \"78ce1ad2-3ddf-4f0f-8b04-471a10465b0c\") " pod="calico-system/whisker-84fddd5b49-7hzd4" Nov 3 16:29:21.959851 kubelet[2802]: I1103 16:29:21.959855 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7tct\" (UniqueName: \"kubernetes.io/projected/78ce1ad2-3ddf-4f0f-8b04-471a10465b0c-kube-api-access-l7tct\") pod \"whisker-84fddd5b49-7hzd4\" (UID: \"78ce1ad2-3ddf-4f0f-8b04-471a10465b0c\") " pod="calico-system/whisker-84fddd5b49-7hzd4" Nov 3 16:29:22.315918 containerd[1613]: time="2025-11-03T16:29:22.315772222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84fddd5b49-7hzd4,Uid:78ce1ad2-3ddf-4f0f-8b04-471a10465b0c,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:22.413136 containerd[1613]: time="2025-11-03T16:29:22.412974693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848d75bc5c-dq964,Uid:974e3016-4f92-40ef-b564-73c74925d5f3,Namespace:calico-apiserver,Attempt:0,}" Nov 3 16:29:22.414322 kubelet[2802]: E1103 16:29:22.414295 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:22.415264 containerd[1613]: time="2025-11-03T16:29:22.415174893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gglmp,Uid:cb9d669e-7b5f-4659-94d4-c84247454d71,Namespace:kube-system,Attempt:0,}" Nov 3 16:29:22.588768 systemd-networkd[1500]: cali14933c1eb86: Link UP Nov 3 16:29:22.589094 systemd-networkd[1500]: cali14933c1eb86: Gained carrier Nov 3 16:29:22.604521 containerd[1613]: 2025-11-03 16:29:22.457 [INFO][4114] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 3 
16:29:22.604521 containerd[1613]: 2025-11-03 16:29:22.477 [INFO][4114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0 calico-apiserver-848d75bc5c- calico-apiserver 974e3016-4f92-40ef-b564-73c74925d5f3 897 0 2025-11-03 16:28:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:848d75bc5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-848d75bc5c-dq964 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14933c1eb86 [] [] }} ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Namespace="calico-apiserver" Pod="calico-apiserver-848d75bc5c-dq964" WorkloadEndpoint="localhost-k8s-calico--apiserver--848d75bc5c--dq964-" Nov 3 16:29:22.604521 containerd[1613]: 2025-11-03 16:29:22.478 [INFO][4114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Namespace="calico-apiserver" Pod="calico-apiserver-848d75bc5c-dq964" WorkloadEndpoint="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" Nov 3 16:29:22.604521 containerd[1613]: 2025-11-03 16:29:22.538 [INFO][4149] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" HandleID="k8s-pod-network.7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Workload="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.539 [INFO][4149] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" HandleID="k8s-pod-network.7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Workload="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b8980), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-848d75bc5c-dq964", "timestamp":"2025-11-03 16:29:22.538469473 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.539 [INFO][4149] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.539 [INFO][4149] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.539 [INFO][4149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.549 [INFO][4149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" host="localhost" Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.557 [INFO][4149] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.561 [INFO][4149] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.563 [INFO][4149] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.566 [INFO][4149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:22.604878 containerd[1613]: 2025-11-03 16:29:22.566 [INFO][4149] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" host="localhost" Nov 3 16:29:22.605243 containerd[1613]: 2025-11-03 16:29:22.567 [INFO][4149] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de Nov 3 16:29:22.605243 containerd[1613]: 2025-11-03 16:29:22.571 [INFO][4149] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" host="localhost" Nov 3 16:29:22.605243 containerd[1613]: 2025-11-03 16:29:22.577 [INFO][4149] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" host="localhost" Nov 3 16:29:22.605243 containerd[1613]: 2025-11-03 16:29:22.577 [INFO][4149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" host="localhost" Nov 3 16:29:22.605243 containerd[1613]: 2025-11-03 16:29:22.577 [INFO][4149] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
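[Annotation] The ipam trace above is one complete allocation round trip: acquire the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, load the block, assign one address, write the handle and the block back, release the lock. That /26 spans .128 through .191; the first workload address claimed here is 192.168.88.129/32 for calico-apiserver-848d75bc5c-dq964, and the entries below claim .130 for coredns-66bc5c9577-gglmp. A quick net/netip sketch of that block arithmetic, illustration only:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Host "localhost" holds an affinity for this block per the log.
	block := netip.MustParsePrefix("192.168.88.128/26")

	first := block.Addr().Next() // .129 -- claimed for calico-apiserver-848d75bc5c-dq964
	second := first.Next()       // .130 -- claimed for coredns-66bc5c9577-gglmp below

	fmt.Println(first, block.Contains(first))   // 192.168.88.129 true
	fmt.Println(second, block.Contains(second)) // 192.168.88.130 true

	outside := netip.MustParseAddr("192.168.88.192")
	fmt.Println(outside, block.Contains(outside)) // just past .191: false
}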
Nov 3 16:29:22.605243 containerd[1613]: 2025-11-03 16:29:22.577 [INFO][4149] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" HandleID="k8s-pod-network.7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Workload="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" Nov 3 16:29:22.605408 containerd[1613]: 2025-11-03 16:29:22.581 [INFO][4114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Namespace="calico-apiserver" Pod="calico-apiserver-848d75bc5c-dq964" WorkloadEndpoint="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0", GenerateName:"calico-apiserver-848d75bc5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"974e3016-4f92-40ef-b564-73c74925d5f3", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848d75bc5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-848d75bc5c-dq964", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14933c1eb86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:22.605483 containerd[1613]: 2025-11-03 16:29:22.581 [INFO][4114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Namespace="calico-apiserver" Pod="calico-apiserver-848d75bc5c-dq964" WorkloadEndpoint="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" Nov 3 16:29:22.605483 containerd[1613]: 2025-11-03 16:29:22.581 [INFO][4114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14933c1eb86 ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Namespace="calico-apiserver" Pod="calico-apiserver-848d75bc5c-dq964" WorkloadEndpoint="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" Nov 3 16:29:22.605483 containerd[1613]: 2025-11-03 16:29:22.589 [INFO][4114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Namespace="calico-apiserver" Pod="calico-apiserver-848d75bc5c-dq964" WorkloadEndpoint="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" Nov 3 16:29:22.605592 containerd[1613]: 2025-11-03 16:29:22.590 [INFO][4114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Namespace="calico-apiserver" Pod="calico-apiserver-848d75bc5c-dq964" WorkloadEndpoint="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0", GenerateName:"calico-apiserver-848d75bc5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"974e3016-4f92-40ef-b564-73c74925d5f3", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848d75bc5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de", Pod:"calico-apiserver-848d75bc5c-dq964", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14933c1eb86", MAC:"9e:b6:aa:cc:19:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:22.605661 containerd[1613]: 2025-11-03 16:29:22.600 [INFO][4114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" Namespace="calico-apiserver" Pod="calico-apiserver-848d75bc5c-dq964" WorkloadEndpoint="localhost-k8s-calico--apiserver--848d75bc5c--dq964-eth0" Nov 3 16:29:22.763797 systemd-networkd[1500]: cali51dccbfc95e: Link UP Nov 3 16:29:22.764124 systemd-networkd[1500]: cali51dccbfc95e: Gained carrier Nov 3 16:29:22.781823 containerd[1613]: 2025-11-03 16:29:22.456 [INFO][4125] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 3 16:29:22.781823 containerd[1613]: 2025-11-03 16:29:22.469 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--gglmp-eth0 coredns-66bc5c9577- kube-system cb9d669e-7b5f-4659-94d4-c84247454d71 895 0 2025-11-03 16:28:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-gglmp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali51dccbfc95e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" Namespace="kube-system" Pod="coredns-66bc5c9577-gglmp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gglmp-" Nov 3 16:29:22.781823 containerd[1613]: 2025-11-03 16:29:22.469 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" Namespace="kube-system" Pod="coredns-66bc5c9577-gglmp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" Nov 3 16:29:22.781823 containerd[1613]: 2025-11-03 16:29:22.538 [INFO][4142] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" HandleID="k8s-pod-network.6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" Workload="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.539 [INFO][4142] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" HandleID="k8s-pod-network.6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" Workload="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-gglmp", "timestamp":"2025-11-03 16:29:22.538380714 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.539 [INFO][4142] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.577 [INFO][4142] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.577 [INFO][4142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.650 [INFO][4142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" host="localhost" Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.656 [INFO][4142] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.694 [INFO][4142] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.696 [INFO][4142] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.698 [INFO][4142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:22.782151 containerd[1613]: 2025-11-03 16:29:22.698 [INFO][4142] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" host="localhost" Nov 3 16:29:22.782403 containerd[1613]: 2025-11-03 16:29:22.699 [INFO][4142] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17 Nov 3 16:29:22.782403 containerd[1613]: 2025-11-03 16:29:22.748 [INFO][4142] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" host="localhost" Nov 3 16:29:22.782403 containerd[1613]: 2025-11-03 16:29:22.754 [INFO][4142] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" host="localhost" Nov 3 16:29:22.782403 containerd[1613]: 2025-11-03 16:29:22.754 [INFO][4142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" host="localhost" Nov 3 16:29:22.782403 containerd[1613]: 2025-11-03 16:29:22.755 [INFO][4142] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 3 16:29:22.782403 containerd[1613]: 2025-11-03 16:29:22.755 [INFO][4142] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" HandleID="k8s-pod-network.6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" Workload="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" Nov 3 16:29:22.782551 containerd[1613]: 2025-11-03 16:29:22.759 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" Namespace="kube-system" Pod="coredns-66bc5c9577-gglmp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--gglmp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cb9d669e-7b5f-4659-94d4-c84247454d71", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-gglmp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51dccbfc95e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:22.782551 containerd[1613]: 2025-11-03 16:29:22.759 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17"
Namespace="kube-system" Pod="coredns-66bc5c9577-gglmp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" Nov 3 16:29:22.782551 containerd[1613]: 2025-11-03 16:29:22.759 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51dccbfc95e ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" Namespace="kube-system" Pod="coredns-66bc5c9577-gglmp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" Nov 3 16:29:22.782551 containerd[1613]: 2025-11-03 16:29:22.766 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" Namespace="kube-system" Pod="coredns-66bc5c9577-gglmp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" Nov 3 16:29:22.782551 containerd[1613]: 2025-11-03 16:29:22.766 [INFO][4125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" Namespace="kube-system" Pod="coredns-66bc5c9577-gglmp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--gglmp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cb9d669e-7b5f-4659-94d4-c84247454d71", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17", Pod:"coredns-66bc5c9577-gglmp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51dccbfc95e", MAC:"fa:fb:40:5c:53:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:22.782551 containerd[1613]: 2025-11-03 16:29:22.778 [INFO][4125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" 
Namespace="kube-system" Pod="coredns-66bc5c9577-gglmp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gglmp-eth0" Nov 3 16:29:22.808628 systemd-networkd[1500]: calie1760c44417: Link UP Nov 3 16:29:22.810616 systemd-networkd[1500]: calie1760c44417: Gained carrier Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.345 [INFO][4096] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.375 [INFO][4096] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84fddd5b49--7hzd4-eth0 whisker-84fddd5b49- calico-system 78ce1ad2-3ddf-4f0f-8b04-471a10465b0c 1009 0 2025-11-03 16:29:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84fddd5b49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84fddd5b49-7hzd4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie1760c44417 [] [] }} ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Namespace="calico-system" Pod="whisker-84fddd5b49-7hzd4" WorkloadEndpoint="localhost-k8s-whisker--84fddd5b49--7hzd4-" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.375 [INFO][4096] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Namespace="calico-system" Pod="whisker-84fddd5b49-7hzd4" WorkloadEndpoint="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.538 [INFO][4109] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" HandleID="k8s-pod-network.c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Workload="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.538 [INFO][4109] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" HandleID="k8s-pod-network.c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Workload="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf1e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84fddd5b49-7hzd4", "timestamp":"2025-11-03 16:29:22.53850091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.538 [INFO][4109] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.755 [INFO][4109] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.755 [INFO][4109] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.764 [INFO][4109] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" host="localhost" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.772 [INFO][4109] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.781 [INFO][4109] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.783 [INFO][4109] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.785 [INFO][4109] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.785 [INFO][4109] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" host="localhost" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.789 [INFO][4109] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0 Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.792 [INFO][4109] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" host="localhost" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.801 [INFO][4109] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" host="localhost" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.801 [INFO][4109] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" host="localhost" Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.801 [INFO][4109] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 3 16:29:22.826833 containerd[1613]: 2025-11-03 16:29:22.801 [INFO][4109] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" HandleID="k8s-pod-network.c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Workload="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0" Nov 3 16:29:22.827577 containerd[1613]: 2025-11-03 16:29:22.806 [INFO][4096] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Namespace="calico-system" Pod="whisker-84fddd5b49-7hzd4" WorkloadEndpoint="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84fddd5b49--7hzd4-eth0", GenerateName:"whisker-84fddd5b49-", Namespace:"calico-system", SelfLink:"", UID:"78ce1ad2-3ddf-4f0f-8b04-471a10465b0c", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 29, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84fddd5b49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84fddd5b49-7hzd4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1760c44417", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:22.827577 containerd[1613]: 2025-11-03 16:29:22.806 [INFO][4096] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Namespace="calico-system" Pod="whisker-84fddd5b49-7hzd4" WorkloadEndpoint="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0" Nov 3 16:29:22.827577 containerd[1613]: 2025-11-03 16:29:22.806 [INFO][4096] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1760c44417 ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Namespace="calico-system" Pod="whisker-84fddd5b49-7hzd4" WorkloadEndpoint="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0" Nov 3 16:29:22.827577 containerd[1613]: 2025-11-03 16:29:22.809 [INFO][4096] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Namespace="calico-system" Pod="whisker-84fddd5b49-7hzd4" WorkloadEndpoint="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0" Nov 3 16:29:22.827577 containerd[1613]: 2025-11-03 16:29:22.809 [INFO][4096] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Namespace="calico-system" Pod="whisker-84fddd5b49-7hzd4" WorkloadEndpoint="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84fddd5b49--7hzd4-eth0", GenerateName:"whisker-84fddd5b49-", Namespace:"calico-system", SelfLink:"", UID:"78ce1ad2-3ddf-4f0f-8b04-471a10465b0c", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 29, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84fddd5b49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0", Pod:"whisker-84fddd5b49-7hzd4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1760c44417", MAC:"16:a2:2e:56:4b:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:22.827577 containerd[1613]: 2025-11-03 16:29:22.823 [INFO][4096] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" Namespace="calico-system" Pod="whisker-84fddd5b49-7hzd4" WorkloadEndpoint="localhost-k8s-whisker--84fddd5b49--7hzd4-eth0" Nov 3 16:29:22.950622 containerd[1613]: time="2025-11-03T16:29:22.950565165Z" level=info msg="connecting to shim 6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17" address="unix:///run/containerd/s/7cc80856870552dbb9083c0c4f435135593dd4a2ba402d5b925db658fc6700c3" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:22.951276 containerd[1613]: time="2025-11-03T16:29:22.951217626Z" level=info msg="connecting to shim c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0" address="unix:///run/containerd/s/e57fa5726f2fd7089de6ea8312c14111d34e1f8371c3f10044e44d1a6b7c3136" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:22.955054 containerd[1613]: time="2025-11-03T16:29:22.953670240Z" level=info msg="connecting to shim 7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de" address="unix:///run/containerd/s/9205bf2f98c268da197137d928950742d3c13a292205b9320247460a8eb0c60b" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:22.988176 systemd[1]: Started cri-containerd-7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de.scope - libcontainer container 7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de. Nov 3 16:29:22.992853 systemd[1]: Started cri-containerd-6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17.scope - libcontainer container 6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17. Nov 3 16:29:22.994921 systemd[1]: Started cri-containerd-c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0.scope - libcontainer container c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0.
Nov 3 16:29:23.010561 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 3 16:29:23.014209 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 3 16:29:23.017579 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 3 16:29:23.042877 kubelet[2802]: I1103 16:29:23.042785 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 3 16:29:23.043802 kubelet[2802]: E1103 16:29:23.043753 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:23.063140 containerd[1613]: time="2025-11-03T16:29:23.063095040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848d75bc5c-dq964,Uid:974e3016-4f92-40ef-b564-73c74925d5f3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7f4f621bedf59daab8bba3a0887adfa05f9f59eaa8daaa25b7d2d8d10f4e29de\"" Nov 3 16:29:23.065995 containerd[1613]: time="2025-11-03T16:29:23.065965882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 3 16:29:23.070689 containerd[1613]: time="2025-11-03T16:29:23.070638384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gglmp,Uid:cb9d669e-7b5f-4659-94d4-c84247454d71,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17\"" Nov 3 16:29:23.077043 kubelet[2802]: E1103 16:29:23.076473 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:23.080059 containerd[1613]: time="2025-11-03T16:29:23.079918249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84fddd5b49-7hzd4,Uid:78ce1ad2-3ddf-4f0f-8b04-471a10465b0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2b8da22384d122585b7cf8879f0a7dac6587463e6a7c4b190420049bb53e6d0\"" Nov 3 16:29:23.081565 containerd[1613]: time="2025-11-03T16:29:23.081535406Z" level=info msg="CreateContainer within sandbox \"6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 3 16:29:23.098642 containerd[1613]: time="2025-11-03T16:29:23.098569934Z" level=info msg="Container a3df7ca33136616d75e551e39a1686ba522a46f8db78ca2d2989165c6911c5d6: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:29:23.115666 containerd[1613]: time="2025-11-03T16:29:23.115593773Z" level=info msg="CreateContainer within sandbox \"6a24d543a2ee978917e92a20efd3c1c8342c85df1d35283963ae8ca9adfc6f17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3df7ca33136616d75e551e39a1686ba522a46f8db78ca2d2989165c6911c5d6\"" Nov 3 16:29:23.116438 containerd[1613]: time="2025-11-03T16:29:23.116401326Z" level=info msg="StartContainer for \"a3df7ca33136616d75e551e39a1686ba522a46f8db78ca2d2989165c6911c5d6\"" Nov 3 16:29:23.117473 containerd[1613]: time="2025-11-03T16:29:23.117432080Z" level=info msg="connecting to shim a3df7ca33136616d75e551e39a1686ba522a46f8db78ca2d2989165c6911c5d6" address="unix:///run/containerd/s/7cc80856870552dbb9083c0c4f435135593dd4a2ba402d5b925db658fc6700c3" protocol=ttrpc version=3 Nov 3 16:29:23.143390 systemd[1]: Started 
cri-containerd-a3df7ca33136616d75e551e39a1686ba522a46f8db78ca2d2989165c6911c5d6.scope - libcontainer container a3df7ca33136616d75e551e39a1686ba522a46f8db78ca2d2989165c6911c5d6. Nov 3 16:29:23.188713 containerd[1613]: time="2025-11-03T16:29:23.188460249Z" level=info msg="StartContainer for \"a3df7ca33136616d75e551e39a1686ba522a46f8db78ca2d2989165c6911c5d6\" returns successfully" Nov 3 16:29:23.411867 kubelet[2802]: I1103 16:29:23.411782 2802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354740f5-5e36-44c6-9400-a45c8540aea2" path="/var/lib/kubelet/pods/354740f5-5e36-44c6-9400-a45c8540aea2/volumes" Nov 3 16:29:23.412602 kubelet[2802]: E1103 16:29:23.412559 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:23.413276 containerd[1613]: time="2025-11-03T16:29:23.413240518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-66brq,Uid:5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf,Namespace:kube-system,Attempt:0,}" Nov 3 16:29:23.414803 containerd[1613]: time="2025-11-03T16:29:23.414746366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65577d7bd7-xn8xr,Uid:dffa515e-d491-4502-8fb4-90d289e9e24a,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:23.449869 containerd[1613]: time="2025-11-03T16:29:23.449799742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:23.450927 containerd[1613]: time="2025-11-03T16:29:23.450885143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 3 16:29:23.450973 containerd[1613]: time="2025-11-03T16:29:23.450929303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:23.451318 kubelet[2802]: E1103 16:29:23.451249 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:23.451387 kubelet[2802]: E1103 16:29:23.451337 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:23.451661 kubelet[2802]: E1103 16:29:23.451619 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-848d75bc5c-dq964_calico-apiserver(974e3016-4f92-40ef-b564-73c74925d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:23.451759 kubelet[2802]: E1103 16:29:23.451735 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" podUID="974e3016-4f92-40ef-b564-73c74925d5f3" Nov 3 16:29:23.452060 containerd[1613]: time="2025-11-03T16:29:23.451989279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 3 16:29:23.543911 systemd-networkd[1500]: cali1e95ef3e1ca: Link UP Nov 3 16:29:23.544202 systemd-networkd[1500]: cali1e95ef3e1ca: Gained carrier Nov 3 16:29:23.556186 kubelet[2802]: E1103 16:29:23.556146 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.452 [INFO][4426] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.468 [INFO][4426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--66brq-eth0 coredns-66bc5c9577- kube-system 5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf 885 0 2025-11-03 16:28:47 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-66brq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e95ef3e1ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] <nil>}} ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Namespace="kube-system" Pod="coredns-66bc5c9577-66brq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--66brq-" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.469 [INFO][4426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Namespace="kube-system" Pod="coredns-66bc5c9577-66brq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--66brq-eth0" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.501 [INFO][4454] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" HandleID="k8s-pod-network.a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Workload="localhost-k8s-coredns--66bc5c9577--66brq-eth0" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.501 [INFO][4454] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" HandleID="k8s-pod-network.a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Workload="localhost-k8s-coredns--66bc5c9577--66brq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b15f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-66brq", "timestamp":"2025-11-03 16:29:23.501718243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.501 [INFO][4454] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.501 [INFO][4454] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.502 [INFO][4454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.508 [INFO][4454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" host="localhost" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.515 [INFO][4454] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.520 [INFO][4454] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.522 [INFO][4454] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.525 [INFO][4454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.525 [INFO][4454] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" host="localhost" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.526 [INFO][4454] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0 Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.530 [INFO][4454] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" host="localhost" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.534 [INFO][4454] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" host="localhost" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.535 [INFO][4454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" host="localhost" Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.535 [INFO][4454] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 3 16:29:23.557120 containerd[1613]: 2025-11-03 16:29:23.535 [INFO][4454] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" HandleID="k8s-pod-network.a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Workload="localhost-k8s-coredns--66bc5c9577--66brq-eth0" Nov 3 16:29:23.557648 containerd[1613]: 2025-11-03 16:29:23.540 [INFO][4426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Namespace="kube-system" Pod="coredns-66bc5c9577-66brq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--66brq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--66brq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-66brq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e95ef3e1ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:23.557648 containerd[1613]: 2025-11-03 16:29:23.540 [INFO][4426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Namespace="kube-system" Pod="coredns-66bc5c9577-66brq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--66brq-eth0" Nov 3 16:29:23.557648 containerd[1613]: 2025-11-03 16:29:23.541 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e95ef3e1ca ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Namespace="kube-system" Pod="coredns-66bc5c9577-66brq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--66brq-eth0" Nov 3 16:29:23.557648 containerd[1613]: 2025-11-03 16:29:23.544
[INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Namespace="kube-system" Pod="coredns-66bc5c9577-66brq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--66brq-eth0" Nov 3 16:29:23.557648 containerd[1613]: 2025-11-03 16:29:23.544 [INFO][4426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Namespace="kube-system" Pod="coredns-66bc5c9577-66brq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--66brq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--66brq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0", Pod:"coredns-66bc5c9577-66brq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e95ef3e1ca", MAC:"ba:70:71:2b:4e:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:23.557648 containerd[1613]: 2025-11-03 16:29:23.554 [INFO][4426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" Namespace="kube-system" Pod="coredns-66bc5c9577-66brq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--66brq-eth0" Nov 3 16:29:23.560938 kubelet[2802]: E1103 16:29:23.560896 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image:
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" podUID="974e3016-4f92-40ef-b564-73c74925d5f3" Nov 3 16:29:23.568549 kubelet[2802]: I1103 16:29:23.568158 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gglmp" podStartSLOduration=36.568136002 podStartE2EDuration="36.568136002s" podCreationTimestamp="2025-11-03 16:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-03 16:29:23.566414898 +0000 UTC m=+42.310235874" watchObservedRunningTime="2025-11-03 16:29:23.568136002 +0000 UTC m=+42.311956979" Nov 3 16:29:23.592690 containerd[1613]: time="2025-11-03T16:29:23.592597816Z" level=info msg="connecting to shim a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0" address="unix:///run/containerd/s/5fef8f717e37f71b00bdd7f777de427598014ccf8e7e161c1e68baf56766d5c6" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:23.628407 systemd[1]: Started cri-containerd-a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0.scope - libcontainer container a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0. Nov 3 16:29:23.646608 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 3 16:29:23.667517 systemd-networkd[1500]: cali772754c3dfb: Link UP Nov 3 16:29:23.669425 systemd-networkd[1500]: cali772754c3dfb: Gained carrier Nov 3 16:29:23.690051 containerd[1613]: time="2025-11-03T16:29:23.689987524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-66brq,Uid:5ff9fe09-a21c-4f7c-94fd-bce37b5b7acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0\"" Nov 3 16:29:23.690814 kubelet[2802]: E1103 16:29:23.690786 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.453 [INFO][4432] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.471 [INFO][4432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0 calico-kube-controllers-65577d7bd7- calico-system dffa515e-d491-4502-8fb4-90d289e9e24a 896 0 2025-11-03 16:29:00 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65577d7bd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-65577d7bd7-xn8xr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali772754c3dfb [] [] <nil>}} ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Namespace="calico-system" Pod="calico-kube-controllers-65577d7bd7-xn8xr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.471 [INFO][4432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Namespace="calico-system"
Pod="calico-kube-controllers-65577d7bd7-xn8xr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.503 [INFO][4456] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" HandleID="k8s-pod-network.fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Workload="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.503 [INFO][4456] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" HandleID="k8s-pod-network.fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Workload="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-65577d7bd7-xn8xr", "timestamp":"2025-11-03 16:29:23.503606808 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.503 [INFO][4456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.535 [INFO][4456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.535 [INFO][4456] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.609 [INFO][4456] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" host="localhost" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.618 [INFO][4456] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.623 [INFO][4456] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.625 [INFO][4456] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.628 [INFO][4456] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.628 [INFO][4456] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" host="localhost" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.629 [INFO][4456] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2 Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.654 [INFO][4456] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" host="localhost" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.660 [INFO][4456] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] 
block=192.168.88.128/26 handle="k8s-pod-network.fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" host="localhost" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.661 [INFO][4456] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" host="localhost" Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.661 [INFO][4456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 3 16:29:23.698173 containerd[1613]: 2025-11-03 16:29:23.661 [INFO][4456] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" HandleID="k8s-pod-network.fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Workload="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" Nov 3 16:29:23.699408 containerd[1613]: 2025-11-03 16:29:23.664 [INFO][4432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Namespace="calico-system" Pod="calico-kube-controllers-65577d7bd7-xn8xr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0", GenerateName:"calico-kube-controllers-65577d7bd7-", Namespace:"calico-system", SelfLink:"", UID:"dffa515e-d491-4502-8fb4-90d289e9e24a", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 29, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65577d7bd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-65577d7bd7-xn8xr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali772754c3dfb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:23.699408 containerd[1613]: 2025-11-03 16:29:23.664 [INFO][4432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Namespace="calico-system" Pod="calico-kube-controllers-65577d7bd7-xn8xr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" Nov 3 16:29:23.699408 containerd[1613]: 2025-11-03 16:29:23.664 [INFO][4432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali772754c3dfb ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Namespace="calico-system" Pod="calico-kube-controllers-65577d7bd7-xn8xr"
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" Nov 3 16:29:23.699408 containerd[1613]: 2025-11-03 16:29:23.667 [INFO][4432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Namespace="calico-system" Pod="calico-kube-controllers-65577d7bd7-xn8xr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" Nov 3 16:29:23.699408 containerd[1613]: 2025-11-03 16:29:23.670 [INFO][4432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Namespace="calico-system" Pod="calico-kube-controllers-65577d7bd7-xn8xr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0", GenerateName:"calico-kube-controllers-65577d7bd7-", Namespace:"calico-system", SelfLink:"", UID:"dffa515e-d491-4502-8fb4-90d289e9e24a", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65577d7bd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2", Pod:"calico-kube-controllers-65577d7bd7-xn8xr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali772754c3dfb", MAC:"6a:07:6d:83:fb:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:23.699408 containerd[1613]: 2025-11-03 16:29:23.694 [INFO][4432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" Namespace="calico-system" Pod="calico-kube-controllers-65577d7bd7-xn8xr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65577d7bd7--xn8xr-eth0" Nov 3 16:29:23.701307 kubelet[2802]: I1103 16:29:23.700157 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 3 16:29:23.701463 containerd[1613]: time="2025-11-03T16:29:23.700684537Z" level=info msg="CreateContainer within sandbox \"a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 3 16:29:23.702342 kubelet[2802]: E1103 16:29:23.702281 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:23.719564 containerd[1613]: time="2025-11-03T16:29:23.719488638Z" level=info msg="Container 
25661e563fa4277685227f0f5157ece732417076f2eb46fd9c3016122120b4ba: CDI devices from CRI Config.CDIDevices: []" Nov 3 16:29:23.734576 containerd[1613]: time="2025-11-03T16:29:23.734520586Z" level=info msg="CreateContainer within sandbox \"a7ffee476110f26767b274c41c899124c0974db03a941b5ec16de1ef632e2cc0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25661e563fa4277685227f0f5157ece732417076f2eb46fd9c3016122120b4ba\"" Nov 3 16:29:23.737381 containerd[1613]: time="2025-11-03T16:29:23.736948230Z" level=info msg="StartContainer for \"25661e563fa4277685227f0f5157ece732417076f2eb46fd9c3016122120b4ba\"" Nov 3 16:29:23.740026 containerd[1613]: time="2025-11-03T16:29:23.739910807Z" level=info msg="connecting to shim 25661e563fa4277685227f0f5157ece732417076f2eb46fd9c3016122120b4ba" address="unix:///run/containerd/s/5fef8f717e37f71b00bdd7f777de427598014ccf8e7e161c1e68baf56766d5c6" protocol=ttrpc version=3 Nov 3 16:29:23.744190 containerd[1613]: time="2025-11-03T16:29:23.744116098Z" level=info msg="connecting to shim fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2" address="unix:///run/containerd/s/5327323d1b3c166bb004c9928a36490f33bd473a61afc8b4719170aade36fb3a" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:23.758916 systemd-networkd[1500]: cali14933c1eb86: Gained IPv6LL Nov 3 16:29:23.763314 systemd[1]: Started cri-containerd-25661e563fa4277685227f0f5157ece732417076f2eb46fd9c3016122120b4ba.scope - libcontainer container 25661e563fa4277685227f0f5157ece732417076f2eb46fd9c3016122120b4ba. Nov 3 16:29:23.774356 systemd[1]: Started cri-containerd-fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2.scope - libcontainer container fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2. Nov 3 16:29:23.790475 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 3 16:29:23.805788 containerd[1613]: time="2025-11-03T16:29:23.805735912Z" level=info msg="StartContainer for \"25661e563fa4277685227f0f5157ece732417076f2eb46fd9c3016122120b4ba\" returns successfully" Nov 3 16:29:23.819943 containerd[1613]: time="2025-11-03T16:29:23.819897755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:23.822257 containerd[1613]: time="2025-11-03T16:29:23.822215642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 3 16:29:23.822746 containerd[1613]: time="2025-11-03T16:29:23.822359141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:23.822800 kubelet[2802]: E1103 16:29:23.822556 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 3 16:29:23.822800 kubelet[2802]: E1103 16:29:23.822615 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 3 16:29:23.822992 kubelet[2802]: E1103 
16:29:23.822954 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-84fddd5b49-7hzd4_calico-system(78ce1ad2-3ddf-4f0f-8b04-471a10465b0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:23.824891 containerd[1613]: time="2025-11-03T16:29:23.824857142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 3 16:29:23.840460 containerd[1613]: time="2025-11-03T16:29:23.840408053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65577d7bd7-xn8xr,Uid:dffa515e-d491-4502-8fb4-90d289e9e24a,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe27f9f9cb3763948df9ae2acac0d62e7ba6e2de4cf81b6d55d6b37a0a0662e2\"" Nov 3 16:29:24.200079 containerd[1613]: time="2025-11-03T16:29:24.199991048Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:24.205390 systemd-networkd[1500]: cali51dccbfc95e: Gained IPv6LL Nov 3 16:29:24.269187 systemd-networkd[1500]: calie1760c44417: Gained IPv6LL Nov 3 16:29:24.349666 containerd[1613]: time="2025-11-03T16:29:24.348976464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:24.349666 containerd[1613]: time="2025-11-03T16:29:24.349069051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 3 16:29:24.350297 kubelet[2802]: E1103 16:29:24.350237 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 3 16:29:24.351214 kubelet[2802]: E1103 16:29:24.350311 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 3 16:29:24.351214 kubelet[2802]: E1103 16:29:24.351038 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84fddd5b49-7hzd4_calico-system(78ce1ad2-3ddf-4f0f-8b04-471a10465b0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:24.351214 kubelet[2802]: E1103 16:29:24.351096 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84fddd5b49-7hzd4" podUID="78ce1ad2-3ddf-4f0f-8b04-471a10465b0c" Nov 3 16:29:24.351386 containerd[1613]: time="2025-11-03T16:29:24.351023811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 3 16:29:24.413321 containerd[1613]: time="2025-11-03T16:29:24.413040678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnj89,Uid:6648243c-1869-4d41-a84f-1ec8db284c55,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:24.569600 kubelet[2802]: E1103 16:29:24.568963 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:24.569600 kubelet[2802]: E1103 16:29:24.569405 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" podUID="974e3016-4f92-40ef-b564-73c74925d5f3" Nov 3 16:29:24.570979 kubelet[2802]: E1103 16:29:24.570959 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:24.571430 kubelet[2802]: E1103 16:29:24.571411 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:24.575049 kubelet[2802]: E1103 16:29:24.574605 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84fddd5b49-7hzd4" podUID="78ce1ad2-3ddf-4f0f-8b04-471a10465b0c" Nov 3 16:29:24.583328 systemd-networkd[1500]: cali29c80908ef7: Link UP Nov 3 16:29:24.585550 systemd-networkd[1500]: cali29c80908ef7: Gained carrier Nov 3 16:29:24.604563 kubelet[2802]: I1103 16:29:24.604210 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-66brq" podStartSLOduration=37.604183245 podStartE2EDuration="37.604183245s" podCreationTimestamp="2025-11-03 16:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-03 16:29:24.587151571 +0000 UTC m=+43.330972567" watchObservedRunningTime="2025-11-03 16:29:24.604183245 +0000 UTC 
m=+43.348004221" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.480 [INFO][4648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lnj89-eth0 csi-node-driver- calico-system 6648243c-1869-4d41-a84f-1ec8db284c55 780 0 2025-11-03 16:29:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lnj89 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali29c80908ef7 [] [] }} ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Namespace="calico-system" Pod="csi-node-driver-lnj89" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnj89-" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.481 [INFO][4648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Namespace="calico-system" Pod="csi-node-driver-lnj89" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnj89-eth0" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.534 [INFO][4671] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" HandleID="k8s-pod-network.f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Workload="localhost-k8s-csi--node--driver--lnj89-eth0" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.534 [INFO][4671] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" HandleID="k8s-pod-network.f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Workload="localhost-k8s-csi--node--driver--lnj89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051b410), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lnj89", "timestamp":"2025-11-03 16:29:24.534634348 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.535 [INFO][4671] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.535 [INFO][4671] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.535 [INFO][4671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.543 [INFO][4671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" host="localhost" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.548 [INFO][4671] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.552 [INFO][4671] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.554 [INFO][4671] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.556 [INFO][4671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.556 [INFO][4671] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" host="localhost" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.558 [INFO][4671] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35 Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.564 [INFO][4671] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" host="localhost" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.571 [INFO][4671] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" host="localhost" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.571 [INFO][4671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" host="localhost" Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.571 [INFO][4671] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 3 16:29:24.614133 containerd[1613]: 2025-11-03 16:29:24.571 [INFO][4671] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" HandleID="k8s-pod-network.f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Workload="localhost-k8s-csi--node--driver--lnj89-eth0" Nov 3 16:29:24.614758 containerd[1613]: 2025-11-03 16:29:24.579 [INFO][4648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Namespace="calico-system" Pod="csi-node-driver-lnj89" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnj89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lnj89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6648243c-1869-4d41-a84f-1ec8db284c55", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lnj89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali29c80908ef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:24.614758 containerd[1613]: 2025-11-03 16:29:24.579 [INFO][4648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Namespace="calico-system" Pod="csi-node-driver-lnj89" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnj89-eth0" Nov 3 16:29:24.614758 containerd[1613]: 2025-11-03 16:29:24.579 [INFO][4648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29c80908ef7 ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Namespace="calico-system" Pod="csi-node-driver-lnj89" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnj89-eth0" Nov 3 16:29:24.614758 containerd[1613]: 2025-11-03 16:29:24.586 [INFO][4648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Namespace="calico-system" Pod="csi-node-driver-lnj89" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnj89-eth0" Nov 3 16:29:24.614758 containerd[1613]: 2025-11-03 16:29:24.587 [INFO][4648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Namespace="calico-system" Pod="csi-node-driver-lnj89" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lnj89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lnj89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6648243c-1869-4d41-a84f-1ec8db284c55", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35", Pod:"csi-node-driver-lnj89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali29c80908ef7", MAC:"fe:92:d2:cb:8c:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:24.614758 containerd[1613]: 2025-11-03 16:29:24.605 [INFO][4648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" Namespace="calico-system" Pod="csi-node-driver-lnj89" WorkloadEndpoint="localhost-k8s-csi--node--driver--lnj89-eth0" Nov 3 16:29:24.654859 containerd[1613]: time="2025-11-03T16:29:24.654800036Z" level=info msg="connecting to shim f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35" address="unix:///run/containerd/s/e336df0a94de7170923a538389041a0ad3d65a0867d4201d06675857921d2977" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:24.698144 systemd[1]: Started cri-containerd-f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35.scope - libcontainer container f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35. 
Nov 3 16:29:24.713704 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 3 16:29:24.719880 containerd[1613]: time="2025-11-03T16:29:24.719836259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:24.721199 containerd[1613]: time="2025-11-03T16:29:24.721131611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 3 16:29:24.721244 containerd[1613]: time="2025-11-03T16:29:24.721226913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:24.721517 kubelet[2802]: E1103 16:29:24.721469 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 3 16:29:24.721571 kubelet[2802]: E1103 16:29:24.721529 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 3 16:29:24.721656 kubelet[2802]: E1103 16:29:24.721629 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65577d7bd7-xn8xr_calico-system(dffa515e-d491-4502-8fb4-90d289e9e24a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:24.721687 kubelet[2802]: E1103 16:29:24.721668 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" podUID="dffa515e-d491-4502-8fb4-90d289e9e24a" Nov 3 16:29:24.738672 containerd[1613]: time="2025-11-03T16:29:24.738609729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnj89,Uid:6648243c-1869-4d41-a84f-1ec8db284c55,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2e97b81ed0712d58215fa424b18498b9eef2ac654e6e108f6a2f53571cb7b35\"" Nov 3 16:29:24.745683 containerd[1613]: time="2025-11-03T16:29:24.745620633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 3 16:29:24.894685 systemd-networkd[1500]: vxlan.calico: Link UP Nov 3 16:29:24.895213 systemd-networkd[1500]: vxlan.calico: Gained carrier Nov 3 16:29:25.120788 containerd[1613]: time="2025-11-03T16:29:25.120698601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:25.122262 containerd[1613]: time="2025-11-03T16:29:25.122229681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 3 16:29:25.122382 containerd[1613]: time="2025-11-03T16:29:25.122303754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:25.122587 kubelet[2802]: E1103 16:29:25.122522 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 3 16:29:25.122641 kubelet[2802]: E1103 16:29:25.122586 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 3 16:29:25.122723 kubelet[2802]: E1103 16:29:25.122689 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lnj89_calico-system(6648243c-1869-4d41-a84f-1ec8db284c55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:25.124315 containerd[1613]: time="2025-11-03T16:29:25.124276240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 3 16:29:25.412609 containerd[1613]: time="2025-11-03T16:29:25.412553974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd84b756-vf947,Uid:9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984,Namespace:calico-apiserver,Attempt:0,}" Nov 3 16:29:25.414582 containerd[1613]: time="2025-11-03T16:29:25.414513497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kw74x,Uid:cf32291d-629d-4182-829b-587a319625b7,Namespace:calico-system,Attempt:0,}" Nov 3 16:29:25.466133 containerd[1613]: time="2025-11-03T16:29:25.465425108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:25.468174 containerd[1613]: time="2025-11-03T16:29:25.468134442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 3 16:29:25.469189 containerd[1613]: time="2025-11-03T16:29:25.468178943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:25.469243 kubelet[2802]: E1103 16:29:25.468991 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 3 16:29:25.469243 kubelet[2802]: E1103 16:29:25.469073 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 3 16:29:25.469660 kubelet[2802]: E1103 16:29:25.469257 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lnj89_calico-system(6648243c-1869-4d41-a84f-1ec8db284c55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:25.469660 kubelet[2802]: E1103 16:29:25.469412 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:25.485341 systemd-networkd[1500]: cali1e95ef3e1ca: Gained IPv6LL Nov 3 16:29:25.545029 systemd-networkd[1500]: cali4f7b98ffdfb: Link UP Nov 3 16:29:25.545259 systemd-networkd[1500]: cali4f7b98ffdfb: Gained carrier Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.465 [INFO][4833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0 calico-apiserver-6bbd84b756- calico-apiserver 9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984 890 0 2025-11-03 16:28:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bbd84b756 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bbd84b756-vf947 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f7b98ffdfb [] [] }} ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-vf947" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--vf947-" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.465 [INFO][4833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-vf947" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.505 [INFO][4865] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" HandleID="k8s-pod-network.66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Workload="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.505 [INFO][4865] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" HandleID="k8s-pod-network.66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Workload="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f8440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bbd84b756-vf947", "timestamp":"2025-11-03 16:29:25.505672855 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.505 [INFO][4865] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.505 [INFO][4865] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.505 [INFO][4865] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.512 [INFO][4865] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" host="localhost" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.516 [INFO][4865] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.520 [INFO][4865] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.522 [INFO][4865] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.523 [INFO][4865] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.524 [INFO][4865] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" host="localhost" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.525 [INFO][4865] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294 Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.529 [INFO][4865] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" host="localhost" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.536 [INFO][4865] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" host="localhost" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.536 [INFO][4865] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" host="localhost" Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.536 [INFO][4865] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 3 16:29:25.560157 containerd[1613]: 2025-11-03 16:29:25.536 [INFO][4865] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" HandleID="k8s-pod-network.66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Workload="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" Nov 3 16:29:25.560699 containerd[1613]: 2025-11-03 16:29:25.539 [INFO][4833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-vf947" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0", GenerateName:"calico-apiserver-6bbd84b756-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbd84b756", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bbd84b756-vf947", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f7b98ffdfb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:25.560699 containerd[1613]: 2025-11-03 16:29:25.539 [INFO][4833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-vf947" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" Nov 3 16:29:25.560699 containerd[1613]: 2025-11-03 16:29:25.539 [INFO][4833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f7b98ffdfb ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-vf947" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" Nov 3 16:29:25.560699 containerd[1613]: 2025-11-03 16:29:25.546 [INFO][4833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-vf947" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" Nov 3 16:29:25.560699 containerd[1613]: 2025-11-03 16:29:25.547 [INFO][4833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-vf947" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0", GenerateName:"calico-apiserver-6bbd84b756-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbd84b756", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294", Pod:"calico-apiserver-6bbd84b756-vf947", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f7b98ffdfb", MAC:"12:81:72:25:3f:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:25.560699 containerd[1613]: 2025-11-03 16:29:25.555 [INFO][4833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-vf947" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--vf947-eth0" Nov 3 16:29:25.572711 kubelet[2802]: E1103 16:29:25.572623 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:25.573316 kubelet[2802]: E1103 16:29:25.573267 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:25.573890 kubelet[2802]: E1103 16:29:25.573266 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" podUID="dffa515e-d491-4502-8fb4-90d289e9e24a" Nov 3 16:29:25.574157 kubelet[2802]: E1103 16:29:25.574111 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:25.596602 containerd[1613]: time="2025-11-03T16:29:25.596541248Z" level=info msg="connecting to shim 66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294" address="unix:///run/containerd/s/251df6cefb3d06e0566d75ba9e2f90e053cf522b1e03cbf9443a21dc056563fd" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:25.630190 systemd[1]: Started cri-containerd-66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294.scope - libcontainer container 66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294. Nov 3 16:29:25.654114 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 3 16:29:25.655145 systemd-networkd[1500]: cali250ed32ac66: Link UP Nov 3 16:29:25.655973 systemd-networkd[1500]: cali250ed32ac66: Gained carrier Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.472 [INFO][4847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--kw74x-eth0 goldmane-7c778bb748- calico-system cf32291d-629d-4182-829b-587a319625b7 893 0 2025-11-03 16:28:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-kw74x eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali250ed32ac66 [] [] }} ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Namespace="calico-system" Pod="goldmane-7c778bb748-kw74x" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--kw74x-" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.472 [INFO][4847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Namespace="calico-system" Pod="goldmane-7c778bb748-kw74x" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.505 [INFO][4867] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" HandleID="k8s-pod-network.93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Workload="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.506 [INFO][4867] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" HandleID="k8s-pod-network.93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Workload="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70d0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-kw74x", "timestamp":"2025-11-03 16:29:25.505662226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.506 [INFO][4867] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.536 [INFO][4867] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.536 [INFO][4867] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.613 [INFO][4867] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" host="localhost" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.622 [INFO][4867] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.629 [INFO][4867] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.631 [INFO][4867] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.633 [INFO][4867] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.633 [INFO][4867] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" host="localhost" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.635 [INFO][4867] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772 Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.640 [INFO][4867] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" host="localhost" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.647 [INFO][4867] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" host="localhost" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.647 [INFO][4867] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" host="localhost" Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.647 [INFO][4867] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 3 16:29:25.672910 containerd[1613]: 2025-11-03 16:29:25.647 [INFO][4867] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" HandleID="k8s-pod-network.93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Workload="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" Nov 3 16:29:25.673583 containerd[1613]: 2025-11-03 16:29:25.651 [INFO][4847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Namespace="calico-system" Pod="goldmane-7c778bb748-kw74x" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--kw74x-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"cf32291d-629d-4182-829b-587a319625b7", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-kw74x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali250ed32ac66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:25.673583 containerd[1613]: 2025-11-03 16:29:25.651 [INFO][4847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Namespace="calico-system" Pod="goldmane-7c778bb748-kw74x" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" Nov 3 16:29:25.673583 containerd[1613]: 2025-11-03 16:29:25.651 [INFO][4847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali250ed32ac66 ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Namespace="calico-system" Pod="goldmane-7c778bb748-kw74x" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" Nov 3 16:29:25.673583 containerd[1613]: 2025-11-03 16:29:25.655 [INFO][4847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Namespace="calico-system" Pod="goldmane-7c778bb748-kw74x" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" Nov 3 16:29:25.673583 containerd[1613]: 2025-11-03 16:29:25.656 [INFO][4847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Namespace="calico-system" Pod="goldmane-7c778bb748-kw74x" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--kw74x-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"cf32291d-629d-4182-829b-587a319625b7", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772", Pod:"goldmane-7c778bb748-kw74x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali250ed32ac66", MAC:"e6:f0:fb:0b:6d:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:25.673583 containerd[1613]: 2025-11-03 16:29:25.668 [INFO][4847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" Namespace="calico-system" Pod="goldmane-7c778bb748-kw74x" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--kw74x-eth0" Nov 3 16:29:25.677175 systemd-networkd[1500]: cali772754c3dfb: Gained IPv6LL Nov 3 16:29:25.694701 containerd[1613]: time="2025-11-03T16:29:25.693808697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd84b756-vf947,Uid:9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"66c7e1ee806344765886c371301dbee737fe34d6815eb2761fccb0dc41c5b294\"" Nov 3 16:29:25.696930 containerd[1613]: time="2025-11-03T16:29:25.696775165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 3 16:29:25.710623 containerd[1613]: time="2025-11-03T16:29:25.710548686Z" level=info msg="connecting to shim 93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772" address="unix:///run/containerd/s/d59585454dbd2f08417e5367401714cb24bcfa761b4809cdf829fbadf83bac8f" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:25.743767 systemd[1]: Started cri-containerd-93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772.scope - libcontainer container 93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772. 
Nov 3 16:29:25.761566 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 3 16:29:25.808849 containerd[1613]: time="2025-11-03T16:29:25.808806550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kw74x,Uid:cf32291d-629d-4182-829b-587a319625b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"93e0dca48e2fa85cab6ac9362d7bc8af2822cbef5ff87af11c366531b35e5772\"" Nov 3 16:29:26.066781 containerd[1613]: time="2025-11-03T16:29:26.066568039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:26.068182 containerd[1613]: time="2025-11-03T16:29:26.068120441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 3 16:29:26.068245 containerd[1613]: time="2025-11-03T16:29:26.068136059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:26.068488 kubelet[2802]: E1103 16:29:26.068429 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:26.068543 kubelet[2802]: E1103 16:29:26.068500 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:26.068886 kubelet[2802]: E1103 16:29:26.068692 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bbd84b756-vf947_calico-apiserver(9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:26.068886 kubelet[2802]: E1103 16:29:26.068740 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" podUID="9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984" Nov 3 16:29:26.078590 containerd[1613]: time="2025-11-03T16:29:26.078534318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 3 16:29:26.195582 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:48156.service - OpenSSH per-connection server daemon (10.0.0.1:48156). Nov 3 16:29:26.261074 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 48156 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:26.263427 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:26.268469 systemd-logind[1582]: New session 11 of user core. 
Nov 3 16:29:26.281148 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 3 16:29:26.369841 sshd[5000]: Connection closed by 10.0.0.1 port 48156 Nov 3 16:29:26.370192 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:26.374652 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:48156.service: Deactivated successfully. Nov 3 16:29:26.376796 systemd[1]: session-11.scope: Deactivated successfully. Nov 3 16:29:26.377700 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit. Nov 3 16:29:26.378824 systemd-logind[1582]: Removed session 11. Nov 3 16:29:26.409040 containerd[1613]: time="2025-11-03T16:29:26.408973876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:26.411621 containerd[1613]: time="2025-11-03T16:29:26.411582895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 3 16:29:26.411696 containerd[1613]: time="2025-11-03T16:29:26.411665364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:26.412048 kubelet[2802]: E1103 16:29:26.411871 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 3 16:29:26.412048 kubelet[2802]: E1103 16:29:26.411940 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 3 16:29:26.412172 containerd[1613]: time="2025-11-03T16:29:26.411890500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd84b756-z5kfx,Uid:84a57abf-59e3-4ca8-817d-7787f6e42d37,Namespace:calico-apiserver,Attempt:0,}" Nov 3 16:29:26.412200 kubelet[2802]: E1103 16:29:26.412066 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kw74x_calico-system(cf32291d-629d-4182-829b-587a319625b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:26.412200 kubelet[2802]: E1103 16:29:26.412103 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kw74x" podUID="cf32291d-629d-4182-829b-587a319625b7" Nov 3 16:29:26.510490 systemd-networkd[1500]: cali1a3f6b10673: Link UP Nov 3 16:29:26.511663 systemd-networkd[1500]: cali1a3f6b10673: Gained carrier Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.449 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0 calico-apiserver-6bbd84b756- calico-apiserver 84a57abf-59e3-4ca8-817d-7787f6e42d37 892 0 2025-11-03 16:28:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bbd84b756 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bbd84b756-z5kfx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a3f6b10673 [] [] }} ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-z5kfx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.449 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-z5kfx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.474 [INFO][5029] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" HandleID="k8s-pod-network.5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Workload="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.474 [INFO][5029] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" HandleID="k8s-pod-network.5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Workload="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000511390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bbd84b756-z5kfx", "timestamp":"2025-11-03 16:29:26.474429434 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.474 [INFO][5029] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.474 [INFO][5029] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.474 [INFO][5029] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.481 [INFO][5029] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" host="localhost" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.485 [INFO][5029] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.488 [INFO][5029] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.490 [INFO][5029] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.491 [INFO][5029] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.491 [INFO][5029] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" host="localhost" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.493 [INFO][5029] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15 Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.498 [INFO][5029] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" host="localhost" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.504 [INFO][5029] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" host="localhost" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.504 [INFO][5029] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" host="localhost" Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.504 [INFO][5029] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 3 16:29:26.526827 containerd[1613]: 2025-11-03 16:29:26.504 [INFO][5029] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" HandleID="k8s-pod-network.5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Workload="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" Nov 3 16:29:26.527662 containerd[1613]: 2025-11-03 16:29:26.507 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-z5kfx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0", GenerateName:"calico-apiserver-6bbd84b756-", Namespace:"calico-apiserver", SelfLink:"", UID:"84a57abf-59e3-4ca8-817d-7787f6e42d37", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbd84b756", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bbd84b756-z5kfx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a3f6b10673", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:26.527662 containerd[1613]: 2025-11-03 16:29:26.508 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-z5kfx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" Nov 3 16:29:26.527662 containerd[1613]: 2025-11-03 16:29:26.508 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a3f6b10673 ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-z5kfx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" Nov 3 16:29:26.527662 containerd[1613]: 2025-11-03 16:29:26.511 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-z5kfx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" Nov 3 16:29:26.527662 containerd[1613]: 2025-11-03 16:29:26.512 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-z5kfx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0", GenerateName:"calico-apiserver-6bbd84b756-", Namespace:"calico-apiserver", SelfLink:"", UID:"84a57abf-59e3-4ca8-817d-7787f6e42d37", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.November, 3, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bbd84b756", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15", Pod:"calico-apiserver-6bbd84b756-z5kfx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a3f6b10673", MAC:"32:d6:8c:5e:04:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 3 16:29:26.527662 containerd[1613]: 2025-11-03 16:29:26.523 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" Namespace="calico-apiserver" Pod="calico-apiserver-6bbd84b756-z5kfx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bbd84b756--z5kfx-eth0" Nov 3 16:29:26.548810 containerd[1613]: time="2025-11-03T16:29:26.548736309Z" level=info msg="connecting to shim 5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15" address="unix:///run/containerd/s/c508ca8ca6febe151701fd4028ec225cd50487712d2ad7cf041bdc524116b072" namespace=k8s.io protocol=ttrpc version=3 Nov 3 16:29:26.573332 systemd-networkd[1500]: cali29c80908ef7: Gained IPv6LL Nov 3 16:29:26.576345 systemd[1]: Started cri-containerd-5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15.scope - libcontainer container 5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15. 
Nov 3 16:29:26.578147 kubelet[2802]: E1103 16:29:26.578072 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kw74x" podUID="cf32291d-629d-4182-829b-587a319625b7" Nov 3 16:29:26.581107 kubelet[2802]: E1103 16:29:26.580928 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" podUID="9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984" Nov 3 16:29:26.583972 kubelet[2802]: E1103 16:29:26.583908 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:26.620095 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 3 16:29:26.653171 containerd[1613]: time="2025-11-03T16:29:26.653125232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bbd84b756-z5kfx,Uid:84a57abf-59e3-4ca8-817d-7787f6e42d37,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5a55ef199b41bf99bad828bb2dec3585cbc1ee567625fdaa126c16ec7bf35f15\"" Nov 3 16:29:26.654929 containerd[1613]: time="2025-11-03T16:29:26.654903531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 3 16:29:26.701157 systemd-networkd[1500]: vxlan.calico: Gained IPv6LL Nov 3 16:29:26.829193 systemd-networkd[1500]: cali4f7b98ffdfb: Gained IPv6LL Nov 3 16:29:26.893227 systemd-networkd[1500]: cali250ed32ac66: Gained IPv6LL Nov 3 16:29:26.986927 containerd[1613]: time="2025-11-03T16:29:26.986846132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:26.997135 containerd[1613]: time="2025-11-03T16:29:26.997075265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 3 16:29:26.997233 containerd[1613]: time="2025-11-03T16:29:26.997135454Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:26.997419 kubelet[2802]: E1103 16:29:26.997364 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:26.997496 kubelet[2802]: E1103 16:29:26.997426 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:26.997531 kubelet[2802]: E1103 16:29:26.997513 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bbd84b756-z5kfx_calico-apiserver(84a57abf-59e3-4ca8-817d-7787f6e42d37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:26.997573 kubelet[2802]: E1103 16:29:26.997550 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" podUID="84a57abf-59e3-4ca8-817d-7787f6e42d37" Nov 3 16:29:27.583751 kubelet[2802]: E1103 16:29:27.583688 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" podUID="84a57abf-59e3-4ca8-817d-7787f6e42d37" Nov 3 16:29:27.584258 kubelet[2802]: E1103 16:29:27.584144 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kw74x" podUID="cf32291d-629d-4182-829b-587a319625b7" Nov 3 16:29:27.584258 kubelet[2802]: E1103 16:29:27.584209 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" podUID="9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984" Nov 3 16:29:27.598354 
systemd-networkd[1500]: cali1a3f6b10673: Gained IPv6LL Nov 3 16:29:28.585850 kubelet[2802]: E1103 16:29:28.585759 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" podUID="84a57abf-59e3-4ca8-817d-7787f6e42d37" Nov 3 16:29:31.390543 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:48168.service - OpenSSH per-connection server daemon (10.0.0.1:48168). Nov 3 16:29:31.452555 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 48168 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:31.453874 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:31.458044 systemd-logind[1582]: New session 12 of user core. Nov 3 16:29:31.464182 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 3 16:29:31.536746 sshd[5105]: Connection closed by 10.0.0.1 port 48168 Nov 3 16:29:31.537114 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:31.547877 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:48168.service: Deactivated successfully. Nov 3 16:29:31.549721 systemd[1]: session-12.scope: Deactivated successfully. Nov 3 16:29:31.550513 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit. Nov 3 16:29:31.553387 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:48170.service - OpenSSH per-connection server daemon (10.0.0.1:48170). Nov 3 16:29:31.554137 systemd-logind[1582]: Removed session 12. Nov 3 16:29:31.611611 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 48170 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:31.612989 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:31.617508 systemd-logind[1582]: New session 13 of user core. Nov 3 16:29:31.624212 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 3 16:29:31.730081 sshd[5127]: Connection closed by 10.0.0.1 port 48170 Nov 3 16:29:31.730514 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:31.743847 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:48170.service: Deactivated successfully. Nov 3 16:29:31.745892 systemd[1]: session-13.scope: Deactivated successfully. Nov 3 16:29:31.748758 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. Nov 3 16:29:31.751830 systemd-logind[1582]: Removed session 13. Nov 3 16:29:31.754779 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:48174.service - OpenSSH per-connection server daemon (10.0.0.1:48174). Nov 3 16:29:31.812229 sshd[5139]: Accepted publickey for core from 10.0.0.1 port 48174 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:31.814304 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:31.820316 systemd-logind[1582]: New session 14 of user core. Nov 3 16:29:31.826151 systemd[1]: Started session-14.scope - Session 14 of User core. 
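
The ErrImagePull to ImagePullBackOff progression above follows kubelet's usual retry discipline: each failed pull roughly doubles the wait before the next attempt, up to a cap. A sketch of that schedule; the 10s initial delay and 300s ceiling are assumed stock-kubelet defaults, not values from this log, and the exact constants vary by kubelet version:

    def backoff_schedule(initial=10.0, cap=300.0):
        """Yield successive retry delays: exponential doubling with a hard
        cap, the pattern behind the ImagePullBackOff entries above.
        (10s/300s are assumed defaults, not taken from the log.)"""
        delay = initial
        while True:
            yield delay
            delay = min(delay * 2, cap)

    schedule = backoff_schedule()
    for attempt in range(1, 8):
        print(f"attempt {attempt}: wait {next(schedule):.0f}s before retrying")
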
Nov 3 16:29:31.913674 sshd[5142]: Connection closed by 10.0.0.1 port 48174 Nov 3 16:29:31.914095 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:31.919341 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:48174.service: Deactivated successfully. Nov 3 16:29:31.921683 systemd[1]: session-14.scope: Deactivated successfully. Nov 3 16:29:31.922591 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. Nov 3 16:29:31.924062 systemd-logind[1582]: Removed session 14. Nov 3 16:29:35.409902 containerd[1613]: time="2025-11-03T16:29:35.409838751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 3 16:29:35.724142 containerd[1613]: time="2025-11-03T16:29:35.723960690Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:35.725259 containerd[1613]: time="2025-11-03T16:29:35.725198175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 3 16:29:35.725337 containerd[1613]: time="2025-11-03T16:29:35.725282009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:35.725590 kubelet[2802]: E1103 16:29:35.725526 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:35.725991 kubelet[2802]: E1103 16:29:35.725604 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:35.725991 kubelet[2802]: E1103 16:29:35.725743 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-848d75bc5c-dq964_calico-apiserver(974e3016-4f92-40ef-b564-73c74925d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:35.725991 kubelet[2802]: E1103 16:29:35.725799 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" podUID="974e3016-4f92-40ef-b564-73c74925d5f3" Nov 3 16:29:36.929675 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:60632.service - OpenSSH per-connection server daemon (10.0.0.1:60632). Nov 3 16:29:36.987610 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 60632 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:36.988926 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:36.993358 systemd-logind[1582]: New session 15 of user core. 
Nov 3 16:29:37.004172 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 3 16:29:37.075671 sshd[5166]: Connection closed by 10.0.0.1 port 60632 Nov 3 16:29:37.075983 sshd-session[5163]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:37.080165 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:60632.service: Deactivated successfully. Nov 3 16:29:37.082436 systemd[1]: session-15.scope: Deactivated successfully. Nov 3 16:29:37.083224 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. Nov 3 16:29:37.084593 systemd-logind[1582]: Removed session 15. Nov 3 16:29:38.409558 containerd[1613]: time="2025-11-03T16:29:38.409507493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 3 16:29:38.768914 containerd[1613]: time="2025-11-03T16:29:38.768741431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:38.769956 containerd[1613]: time="2025-11-03T16:29:38.769911168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 3 16:29:38.769956 containerd[1613]: time="2025-11-03T16:29:38.769988259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:38.770205 kubelet[2802]: E1103 16:29:38.770148 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 3 16:29:38.770205 kubelet[2802]: E1103 16:29:38.770198 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 3 16:29:38.770596 kubelet[2802]: E1103 16:29:38.770388 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-84fddd5b49-7hzd4_calico-system(78ce1ad2-3ddf-4f0f-8b04-471a10465b0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:38.770936 containerd[1613]: time="2025-11-03T16:29:38.770650769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 3 16:29:39.112471 containerd[1613]: time="2025-11-03T16:29:39.112373416Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:39.113893 containerd[1613]: time="2025-11-03T16:29:39.113832063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 3 16:29:39.114130 containerd[1613]: time="2025-11-03T16:29:39.113937506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:39.114867 kubelet[2802]: E1103 16:29:39.114681 2802 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 3 16:29:39.115044 kubelet[2802]: E1103 16:29:39.114989 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 3 16:29:39.115721 kubelet[2802]: E1103 16:29:39.115610 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65577d7bd7-xn8xr_calico-system(dffa515e-d491-4502-8fb4-90d289e9e24a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:39.116139 kubelet[2802]: E1103 16:29:39.116047 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" podUID="dffa515e-d491-4502-8fb4-90d289e9e24a" Nov 3 16:29:39.116544 containerd[1613]: time="2025-11-03T16:29:39.116476871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 3 16:29:39.488766 containerd[1613]: time="2025-11-03T16:29:39.488448317Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:39.490169 containerd[1613]: time="2025-11-03T16:29:39.490117279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 3 16:29:39.490307 containerd[1613]: time="2025-11-03T16:29:39.490235285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:39.491461 kubelet[2802]: E1103 16:29:39.490446 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 3 16:29:39.491580 kubelet[2802]: E1103 16:29:39.491489 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 3 16:29:39.491937 kubelet[2802]: E1103 16:29:39.491730 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84fddd5b49-7hzd4_calico-system(78ce1ad2-3ddf-4f0f-8b04-471a10465b0c): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:39.491937 kubelet[2802]: E1103 16:29:39.491851 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84fddd5b49-7hzd4" podUID="78ce1ad2-3ddf-4f0f-8b04-471a10465b0c" Nov 3 16:29:40.414102 containerd[1613]: time="2025-11-03T16:29:40.413867220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 3 16:29:40.753870 containerd[1613]: time="2025-11-03T16:29:40.753688086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:40.755120 containerd[1613]: time="2025-11-03T16:29:40.755037986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 3 16:29:40.755120 containerd[1613]: time="2025-11-03T16:29:40.755113514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:40.755356 kubelet[2802]: E1103 16:29:40.755235 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 3 16:29:40.755356 kubelet[2802]: E1103 16:29:40.755296 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 3 16:29:40.755801 kubelet[2802]: E1103 16:29:40.755427 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lnj89_calico-system(6648243c-1869-4d41-a84f-1ec8db284c55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:40.757175 containerd[1613]: time="2025-11-03T16:29:40.757117753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 3 16:29:41.133841 containerd[1613]: time="2025-11-03T16:29:41.133783816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:41.135079 containerd[1613]: time="2025-11-03T16:29:41.135027093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 3 16:29:41.135079 containerd[1613]: time="2025-11-03T16:29:41.135055595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:41.135377 kubelet[2802]: E1103 16:29:41.135303 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 3 16:29:41.135462 kubelet[2802]: E1103 16:29:41.135378 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 3 16:29:41.135509 kubelet[2802]: E1103 16:29:41.135482 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lnj89_calico-system(6648243c-1869-4d41-a84f-1ec8db284c55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:41.135596 kubelet[2802]: E1103 16:29:41.135554 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:41.409889 containerd[1613]: time="2025-11-03T16:29:41.409596936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 3 16:29:41.736067 containerd[1613]: time="2025-11-03T16:29:41.735917422Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:41.737307 containerd[1613]: time="2025-11-03T16:29:41.737249772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 3 16:29:41.737422 containerd[1613]: time="2025-11-03T16:29:41.737292831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:41.737538 kubelet[2802]: E1103 16:29:41.737491 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 3 16:29:41.737603 
kubelet[2802]: E1103 16:29:41.737540 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 3 16:29:41.737761 kubelet[2802]: E1103 16:29:41.737726 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kw74x_calico-system(cf32291d-629d-4182-829b-587a319625b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:41.737847 containerd[1613]: time="2025-11-03T16:29:41.737818263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 3 16:29:41.737924 kubelet[2802]: E1103 16:29:41.737876 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kw74x" podUID="cf32291d-629d-4182-829b-587a319625b7" Nov 3 16:29:42.087853 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:60648.service - OpenSSH per-connection server daemon (10.0.0.1:60648). Nov 3 16:29:42.132770 containerd[1613]: time="2025-11-03T16:29:42.132706892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:42.134025 containerd[1613]: time="2025-11-03T16:29:42.133927750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 3 16:29:42.134025 containerd[1613]: time="2025-11-03T16:29:42.133972642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:42.134360 kubelet[2802]: E1103 16:29:42.134322 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:42.134628 kubelet[2802]: E1103 16:29:42.134373 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:42.134628 kubelet[2802]: E1103 16:29:42.134459 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bbd84b756-vf947_calico-apiserver(9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:42.134628 kubelet[2802]: E1103 16:29:42.134495 2802 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" podUID="9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984" Nov 3 16:29:42.148401 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 60648 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:42.150221 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:42.154715 systemd-logind[1582]: New session 16 of user core. Nov 3 16:29:42.169141 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 3 16:29:42.240573 sshd[5188]: Connection closed by 10.0.0.1 port 60648 Nov 3 16:29:42.240866 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:42.245117 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:60648.service: Deactivated successfully. Nov 3 16:29:42.247230 systemd[1]: session-16.scope: Deactivated successfully. Nov 3 16:29:42.247954 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit. Nov 3 16:29:42.249754 systemd-logind[1582]: Removed session 16. Nov 3 16:29:44.409927 containerd[1613]: time="2025-11-03T16:29:44.409865446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 3 16:29:44.745623 containerd[1613]: time="2025-11-03T16:29:44.745410803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 3 16:29:44.746761 containerd[1613]: time="2025-11-03T16:29:44.746713303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 3 16:29:44.746801 containerd[1613]: time="2025-11-03T16:29:44.746767213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 3 16:29:44.746984 kubelet[2802]: E1103 16:29:44.746945 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:44.747354 kubelet[2802]: E1103 16:29:44.746997 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 3 16:29:44.747354 kubelet[2802]: E1103 16:29:44.747117 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bbd84b756-z5kfx_calico-apiserver(84a57abf-59e3-4ca8-817d-7787f6e42d37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 3 16:29:44.747354 kubelet[2802]: E1103 16:29:44.747149 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" podUID="84a57abf-59e3-4ca8-817d-7787f6e42d37" Nov 3 16:29:47.257620 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:50482.service - OpenSSH per-connection server daemon (10.0.0.1:50482). Nov 3 16:29:47.310240 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 50482 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:47.311627 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:47.316254 systemd-logind[1582]: New session 17 of user core. Nov 3 16:29:47.324199 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 3 16:29:47.399266 sshd[5212]: Connection closed by 10.0.0.1 port 50482 Nov 3 16:29:47.399589 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:47.403292 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:50482.service: Deactivated successfully. Nov 3 16:29:47.405352 systemd[1]: session-17.scope: Deactivated successfully. Nov 3 16:29:47.406686 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit. Nov 3 16:29:47.407799 systemd-logind[1582]: Removed session 17. Nov 3 16:29:47.409714 kubelet[2802]: E1103 16:29:47.409481 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" podUID="974e3016-4f92-40ef-b564-73c74925d5f3" Nov 3 16:29:51.409929 kubelet[2802]: E1103 16:29:51.409864 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" podUID="dffa515e-d491-4502-8fb4-90d289e9e24a" Nov 3 16:29:52.416868 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:50492.service - OpenSSH per-connection server daemon (10.0.0.1:50492). Nov 3 16:29:52.488785 sshd[5228]: Accepted publickey for core from 10.0.0.1 port 50492 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:52.490875 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:52.495728 systemd-logind[1582]: New session 18 of user core. Nov 3 16:29:52.503134 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 3 16:29:52.592169 sshd[5231]: Connection closed by 10.0.0.1 port 50492 Nov 3 16:29:52.592664 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:52.606166 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:50492.service: Deactivated successfully. Nov 3 16:29:52.608143 systemd[1]: session-18.scope: Deactivated successfully. 
Nov 3 16:29:52.609199 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit. Nov 3 16:29:52.612201 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:50508.service - OpenSSH per-connection server daemon (10.0.0.1:50508). Nov 3 16:29:52.613105 systemd-logind[1582]: Removed session 18. Nov 3 16:29:52.674378 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 50508 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:52.676214 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:52.681520 systemd-logind[1582]: New session 19 of user core. Nov 3 16:29:52.689170 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 3 16:29:53.061492 sshd[5248]: Connection closed by 10.0.0.1 port 50508 Nov 3 16:29:53.061762 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:53.074940 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:50508.service: Deactivated successfully. Nov 3 16:29:53.077080 systemd[1]: session-19.scope: Deactivated successfully. Nov 3 16:29:53.078052 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit. Nov 3 16:29:53.081089 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:58410.service - OpenSSH per-connection server daemon (10.0.0.1:58410). Nov 3 16:29:53.081753 systemd-logind[1582]: Removed session 19. Nov 3 16:29:53.145227 sshd[5260]: Accepted publickey for core from 10.0.0.1 port 58410 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:53.146511 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:53.151203 systemd-logind[1582]: New session 20 of user core. Nov 3 16:29:53.163176 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 3 16:29:53.271663 kubelet[2802]: E1103 16:29:53.270969 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:53.409699 kubelet[2802]: E1103 16:29:53.409628 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55" Nov 3 16:29:53.642762 sshd[5264]: Connection closed by 10.0.0.1 port 58410 Nov 3 16:29:53.643162 sshd-session[5260]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:53.655414 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:58410.service: Deactivated successfully. Nov 3 16:29:53.658237 systemd[1]: session-20.scope: Deactivated successfully. Nov 3 16:29:53.662122 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit. 
Nov 3 16:29:53.664628 systemd[1]: Started sshd@20-10.0.0.124:22-10.0.0.1:58422.service - OpenSSH per-connection server daemon (10.0.0.1:58422). Nov 3 16:29:53.666514 systemd-logind[1582]: Removed session 20. Nov 3 16:29:53.716962 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 58422 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:53.718372 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:53.723189 systemd-logind[1582]: New session 21 of user core. Nov 3 16:29:53.737166 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 3 16:29:53.924504 sshd[5309]: Connection closed by 10.0.0.1 port 58422 Nov 3 16:29:53.926837 sshd-session[5306]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:53.935522 systemd[1]: sshd@20-10.0.0.124:22-10.0.0.1:58422.service: Deactivated successfully. Nov 3 16:29:53.938685 systemd[1]: session-21.scope: Deactivated successfully. Nov 3 16:29:53.939536 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit. Nov 3 16:29:53.943580 systemd[1]: Started sshd@21-10.0.0.124:22-10.0.0.1:58436.service - OpenSSH per-connection server daemon (10.0.0.1:58436). Nov 3 16:29:53.944509 systemd-logind[1582]: Removed session 21. Nov 3 16:29:53.998941 sshd[5320]: Accepted publickey for core from 10.0.0.1 port 58436 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48 Nov 3 16:29:54.000432 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 3 16:29:54.005178 systemd-logind[1582]: New session 22 of user core. Nov 3 16:29:54.016138 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 3 16:29:54.086225 sshd[5323]: Connection closed by 10.0.0.1 port 58436 Nov 3 16:29:54.086538 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Nov 3 16:29:54.090708 systemd[1]: sshd@21-10.0.0.124:22-10.0.0.1:58436.service: Deactivated successfully. Nov 3 16:29:54.092742 systemd[1]: session-22.scope: Deactivated successfully. Nov 3 16:29:54.093577 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit. Nov 3 16:29:54.094899 systemd-logind[1582]: Removed session 22. 
Nov 3 16:29:54.409161 kubelet[2802]: E1103 16:29:54.409071 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84fddd5b49-7hzd4" podUID="78ce1ad2-3ddf-4f0f-8b04-471a10465b0c" Nov 3 16:29:55.407983 kubelet[2802]: E1103 16:29:55.407932 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:56.410556 kubelet[2802]: E1103 16:29:56.410413 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kw74x" podUID="cf32291d-629d-4182-829b-587a319625b7" Nov 3 16:29:57.408483 kubelet[2802]: E1103 16:29:57.408443 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 3 16:29:57.409716 kubelet[2802]: E1103 16:29:57.409664 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-vf947" podUID="9b5d0d2f-8a47-4e07-931b-0ddd4bf1a984" Nov 3 16:29:58.409496 kubelet[2802]: E1103 16:29:58.409401 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bbd84b756-z5kfx" podUID="84a57abf-59e3-4ca8-817d-7787f6e42d37" Nov 3 16:29:59.098813 systemd[1]: Started sshd@22-10.0.0.124:22-10.0.0.1:58450.service - OpenSSH per-connection server daemon (10.0.0.1:58450). 
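
The "Nameserver limits exceeded" warnings above come from the glibc resolver's three-nameserver limit: the node's resolv.conf lists more servers than libc will consult, so kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8 here) and warns that the rest were omitted. A sketch of that truncation; the fourth server below is invented for illustration, since the log does not say which entry was dropped:

    MAXNS = 3  # glibc resolver limit that the kubelet warning refers to

    def applied_nameservers(resolv_conf, limit=MAXNS):
        """Split resolv.conf nameservers into the applied prefix and the
        omitted remainder, mirroring kubelet's warning above."""
        servers = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:limit], servers[limit:]

    applied, omitted = applied_nameservers(
        "nameserver 1.1.1.1\n"
        "nameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\n"
        "nameserver 8.8.4.4\n"  # hypothetical fourth entry
    )
    print("applied:", " ".join(applied))  # 1.1.1.1 1.0.0.1 8.8.8.8 (as logged)
    print("omitted:", " ".join(omitted))  # 8.8.4.4
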
Nov 3 16:29:59.158289 sshd[5340]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48
Nov 3 16:29:59.159583 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 3 16:29:59.163947 systemd-logind[1582]: New session 23 of user core.
Nov 3 16:29:59.173160 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 3 16:29:59.247946 sshd[5343]: Connection closed by 10.0.0.1 port 58450
Nov 3 16:29:59.248284 sshd-session[5340]: pam_unix(sshd:session): session closed for user core
Nov 3 16:29:59.252864 systemd[1]: sshd@22-10.0.0.124:22-10.0.0.1:58450.service: Deactivated successfully.
Nov 3 16:29:59.255192 systemd[1]: session-23.scope: Deactivated successfully.
Nov 3 16:29:59.256117 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit.
Nov 3 16:29:59.257413 systemd-logind[1582]: Removed session 23.
Nov 3 16:30:02.409635 containerd[1613]: time="2025-11-03T16:30:02.409508971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 3 16:30:02.915982 containerd[1613]: time="2025-11-03T16:30:02.915911284Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 3 16:30:02.917125 containerd[1613]: time="2025-11-03T16:30:02.917031999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 3 16:30:02.917214 containerd[1613]: time="2025-11-03T16:30:02.917154948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Nov 3 16:30:02.917378 kubelet[2802]: E1103 16:30:02.917320 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 3 16:30:02.917772 kubelet[2802]: E1103 16:30:02.917381 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 3 16:30:02.917772 kubelet[2802]: E1103 16:30:02.917591 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65577d7bd7-xn8xr_calico-system(dffa515e-d491-4502-8fb4-90d289e9e24a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 3 16:30:02.917772 kubelet[2802]: E1103 16:30:02.917637 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65577d7bd7-xn8xr" podUID="dffa515e-d491-4502-8fb4-90d289e9e24a"
Nov 3 16:30:02.918000 containerd[1613]: time="2025-11-03T16:30:02.917819519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 3 16:30:03.261247 containerd[1613]: time="2025-11-03T16:30:03.261065216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 3 16:30:03.262437 containerd[1613]: time="2025-11-03T16:30:03.262390151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 3 16:30:03.262516 containerd[1613]: time="2025-11-03T16:30:03.262434493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Nov 3 16:30:03.262631 kubelet[2802]: E1103 16:30:03.262589 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 3 16:30:03.262685 kubelet[2802]: E1103 16:30:03.262638 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 3 16:30:03.262746 kubelet[2802]: E1103 16:30:03.262720 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-848d75bc5c-dq964_calico-apiserver(974e3016-4f92-40ef-b564-73c74925d5f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 3 16:30:03.262791 kubelet[2802]: E1103 16:30:03.262759 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848d75bc5c-dq964" podUID="974e3016-4f92-40ef-b564-73c74925d5f3"
Nov 3 16:30:04.261984 systemd[1]: Started sshd@23-10.0.0.124:22-10.0.0.1:49756.service - OpenSSH per-connection server daemon (10.0.0.1:49756).
Nov 3 16:30:04.341993 sshd[5359]: Accepted publickey for core from 10.0.0.1 port 49756 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48
Nov 3 16:30:04.344279 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 3 16:30:04.348954 systemd-logind[1582]: New session 24 of user core.
Nov 3 16:30:04.357161 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 3 16:30:04.410041 containerd[1613]: time="2025-11-03T16:30:04.409981320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 3 16:30:04.564978 sshd[5362]: Connection closed by 10.0.0.1 port 49756
Nov 3 16:30:04.565248 sshd-session[5359]: pam_unix(sshd:session): session closed for user core
Nov 3 16:30:04.571114 systemd[1]: sshd@23-10.0.0.124:22-10.0.0.1:49756.service: Deactivated successfully.
Nov 3 16:30:04.573821 systemd[1]: session-24.scope: Deactivated successfully.
Nov 3 16:30:04.575071 systemd-logind[1582]: Session 24 logged out. Waiting for processes to exit.
Nov 3 16:30:04.576994 systemd-logind[1582]: Removed session 24.
Nov 3 16:30:04.950175 containerd[1613]: time="2025-11-03T16:30:04.950125779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 3 16:30:04.951268 containerd[1613]: time="2025-11-03T16:30:04.951217513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 3 16:30:04.951268 containerd[1613]: time="2025-11-03T16:30:04.951247849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Nov 3 16:30:04.951496 kubelet[2802]: E1103 16:30:04.951454 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 3 16:30:04.951847 kubelet[2802]: E1103 16:30:04.951502 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 3 16:30:04.951847 kubelet[2802]: E1103 16:30:04.951587 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lnj89_calico-system(6648243c-1869-4d41-a84f-1ec8db284c55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 3 16:30:04.952744 containerd[1613]: time="2025-11-03T16:30:04.952677470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 3 16:30:05.735190 containerd[1613]: time="2025-11-03T16:30:05.735110974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 3 16:30:05.808273 containerd[1613]: time="2025-11-03T16:30:05.808125260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 3 16:30:05.808273 containerd[1613]: time="2025-11-03T16:30:05.808200339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Nov 3 16:30:05.808566 kubelet[2802]: E1103 16:30:05.808446 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 3 16:30:05.808566 kubelet[2802]: E1103 16:30:05.808502 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 3 16:30:05.808838 kubelet[2802]: E1103 16:30:05.808793 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lnj89_calico-system(6648243c-1869-4d41-a84f-1ec8db284c55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 3 16:30:05.808911 kubelet[2802]: E1103 16:30:05.808862 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnj89" podUID="6648243c-1869-4d41-a84f-1ec8db284c55"
Nov 3 16:30:05.809035 containerd[1613]: time="2025-11-03T16:30:05.808843475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 3 16:30:06.156261 containerd[1613]: time="2025-11-03T16:30:06.156191379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 3 16:30:06.157859 containerd[1613]: time="2025-11-03T16:30:06.157787838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 3 16:30:06.157859 containerd[1613]: time="2025-11-03T16:30:06.157865819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Nov 3 16:30:06.158142 kubelet[2802]: E1103 16:30:06.158065 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 3 16:30:06.158142 kubelet[2802]: E1103 16:30:06.158116 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 3 16:30:06.158522 kubelet[2802]: E1103 16:30:06.158202 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-84fddd5b49-7hzd4_calico-system(78ce1ad2-3ddf-4f0f-8b04-471a10465b0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 3 16:30:06.159352 containerd[1613]: time="2025-11-03T16:30:06.159283013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 3 16:30:06.739539 containerd[1613]: time="2025-11-03T16:30:06.739478838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 3 16:30:06.740571 containerd[1613]: time="2025-11-03T16:30:06.740526461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 3 16:30:06.740654 containerd[1613]: time="2025-11-03T16:30:06.740581447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Nov 3 16:30:06.740800 kubelet[2802]: E1103 16:30:06.740752 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 3 16:30:06.740874 kubelet[2802]: E1103 16:30:06.740807 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 3 16:30:06.740923 kubelet[2802]: E1103 16:30:06.740904 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84fddd5b49-7hzd4_calico-system(78ce1ad2-3ddf-4f0f-8b04-471a10465b0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 3 16:30:06.740984 kubelet[2802]: E1103 16:30:06.740942 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84fddd5b49-7hzd4" podUID="78ce1ad2-3ddf-4f0f-8b04-471a10465b0c"
Nov 3 16:30:09.578122 systemd[1]: Started sshd@24-10.0.0.124:22-10.0.0.1:49770.service - OpenSSH per-connection server daemon (10.0.0.1:49770).
Nov 3 16:30:09.639162 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 49770 ssh2: RSA SHA256:6IgjKsfLloMODYUZWLJOfDFsK2vE75XcxHBEtXf0d48
Nov 3 16:30:09.640449 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 3 16:30:09.645290 systemd-logind[1582]: New session 25 of user core.
Nov 3 16:30:09.653277 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 3 16:30:09.734295 sshd[5387]: Connection closed by 10.0.0.1 port 49770
Nov 3 16:30:09.734650 sshd-session[5384]: pam_unix(sshd:session): session closed for user core
Nov 3 16:30:09.740055 systemd[1]: sshd@24-10.0.0.124:22-10.0.0.1:49770.service: Deactivated successfully.
Nov 3 16:30:09.742297 systemd[1]: session-25.scope: Deactivated successfully.
Nov 3 16:30:09.743135 systemd-logind[1582]: Session 25 logged out. Waiting for processes to exit.
Nov 3 16:30:09.744494 systemd-logind[1582]: Removed session 25.
Nov 3 16:30:10.409367 containerd[1613]: time="2025-11-03T16:30:10.409099965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 3 16:30:10.745729 containerd[1613]: time="2025-11-03T16:30:10.745579536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 3 16:30:10.746908 containerd[1613]: time="2025-11-03T16:30:10.746838093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 3 16:30:10.746947 containerd[1613]: time="2025-11-03T16:30:10.746912235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Nov 3 16:30:10.747153 kubelet[2802]: E1103 16:30:10.747103 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 3 16:30:10.747497 kubelet[2802]: E1103 16:30:10.747165 2802 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 3 16:30:10.747497 kubelet[2802]: E1103 16:30:10.747269 2802 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kw74x_calico-system(cf32291d-629d-4182-829b-587a319625b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 3 16:30:10.747497 kubelet[2802]: E1103 16:30:10.747309 2802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kw74x" podUID="cf32291d-629d-4182-829b-587a319625b7"
Nov 3 16:30:11.409070 kubelet[2802]: E1103 16:30:11.408975 2802 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 3 16:30:11.409235 containerd[1613]: time="2025-11-03T16:30:11.409156538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""