Oct 30 13:17:41.321218 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 11:31:03 -00 2025
Oct 30 13:17:41.321242 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9059fc71bb508d9916e045ba086d15ed58da6c6a917da2fc328a48e57682d73b
Oct 30 13:17:41.321259 kernel: BIOS-provided physical RAM map:
Oct 30 13:17:41.321266 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 30 13:17:41.321273 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 30 13:17:41.321280 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 30 13:17:41.321288 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 30 13:17:41.321295 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 30 13:17:41.321304 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 30 13:17:41.321311 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 30 13:17:41.321325 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 30 13:17:41.321331 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 30 13:17:41.321338 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 30 13:17:41.321345 kernel: NX (Execute Disable) protection: active
Oct 30 13:17:41.321354 kernel: APIC: Static calls initialized
Oct 30 13:17:41.321367 kernel: SMBIOS 2.8 present.
Oct 30 13:17:41.321377 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 30 13:17:41.321385 kernel: DMI: Memory slots populated: 1/1
Oct 30 13:17:41.321392 kernel: Hypervisor detected: KVM
Oct 30 13:17:41.321399 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 30 13:17:41.321407 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 30 13:17:41.321414 kernel: kvm-clock: using sched offset of 3983599791 cycles
Oct 30 13:17:41.321422 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 30 13:17:41.321430 kernel: tsc: Detected 2794.748 MHz processor
Oct 30 13:17:41.321445 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 30 13:17:41.321453 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 30 13:17:41.321461 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 30 13:17:41.321469 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 30 13:17:41.321477 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 30 13:17:41.321494 kernel: Using GB pages for direct mapping
Oct 30 13:17:41.321502 kernel: ACPI: Early table checksum verification disabled
Oct 30 13:17:41.321509 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 30 13:17:41.321524 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:17:41.321532 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:17:41.321540 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:17:41.321548 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 30 13:17:41.321556 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:17:41.321564 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:17:41.321571 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:17:41.321586 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:17:41.321601 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Oct 30 13:17:41.321611 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Oct 30 13:17:41.321620 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 30 13:17:41.321630 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Oct 30 13:17:41.321645 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Oct 30 13:17:41.321652 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Oct 30 13:17:41.321660 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Oct 30 13:17:41.321668 kernel: No NUMA configuration found
Oct 30 13:17:41.321676 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 30 13:17:41.321684 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Oct 30 13:17:41.321699 kernel: Zone ranges:
Oct 30 13:17:41.321707 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 30 13:17:41.321715 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 30 13:17:41.321723 kernel: Normal empty
Oct 30 13:17:41.321731 kernel: Device empty
Oct 30 13:17:41.321738 kernel: Movable zone start for each node
Oct 30 13:17:41.321747 kernel: Early memory node ranges
Oct 30 13:17:41.321754 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 30 13:17:41.321768 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 30 13:17:41.321777 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 30 13:17:41.321785 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 13:17:41.321795 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 30 13:17:41.321803 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 30 13:17:41.321813 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 30 13:17:41.321821 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 30 13:17:41.321836 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 30 13:17:41.321844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 30 13:17:41.321854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 30 13:17:41.321862 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 30 13:17:41.321870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 30 13:17:41.321878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 30 13:17:41.321886 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 30 13:17:41.321900 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 30 13:17:41.321908 kernel: TSC deadline timer available
Oct 30 13:17:41.321916 kernel: CPU topo: Max. logical packages: 1
Oct 30 13:17:41.321924 kernel: CPU topo: Max. logical dies: 1
Oct 30 13:17:41.321932 kernel: CPU topo: Max. dies per package: 1
Oct 30 13:17:41.321940 kernel: CPU topo: Max. threads per core: 1
Oct 30 13:17:41.321948 kernel: CPU topo: Num. cores per package: 4
Oct 30 13:17:41.321956 kernel: CPU topo: Num. threads per package: 4
Oct 30 13:17:41.321969 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 30 13:17:41.321993 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 30 13:17:41.322001 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 30 13:17:41.322009 kernel: kvm-guest: setup PV sched yield
Oct 30 13:17:41.322017 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 30 13:17:41.322025 kernel: Booting paravirtualized kernel on KVM
Oct 30 13:17:41.322033 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 30 13:17:41.322049 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 30 13:17:41.322057 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 30 13:17:41.322065 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 30 13:17:41.322073 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 30 13:17:41.322081 kernel: kvm-guest: PV spinlocks enabled
Oct 30 13:17:41.322089 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 30 13:17:41.322098 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9059fc71bb508d9916e045ba086d15ed58da6c6a917da2fc328a48e57682d73b
Oct 30 13:17:41.322113 kernel: random: crng init done
Oct 30 13:17:41.322121 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 30 13:17:41.322129 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 30 13:17:41.322137 kernel: Fallback order for Node 0: 0
Oct 30 13:17:41.322145 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Oct 30 13:17:41.322153 kernel: Policy zone: DMA32
Oct 30 13:17:41.322162 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 13:17:41.322176 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 30 13:17:41.322184 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 30 13:17:41.322192 kernel: ftrace: allocated 157 pages with 5 groups
Oct 30 13:17:41.322200 kernel: Dynamic Preempt: voluntary
Oct 30 13:17:41.322208 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 13:17:41.322217 kernel: rcu: RCU event tracing is enabled.
Oct 30 13:17:41.322225 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 30 13:17:41.322233 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 13:17:41.322250 kernel: Rude variant of Tasks RCU enabled.
Oct 30 13:17:41.322258 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 13:17:41.322266 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 13:17:41.322274 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 30 13:17:41.322281 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 13:17:41.322290 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 13:17:41.322298 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 13:17:41.322312 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 30 13:17:41.322321 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 30 13:17:41.322349 kernel: Console: colour VGA+ 80x25
Oct 30 13:17:41.322363 kernel: printk: legacy console [ttyS0] enabled
Oct 30 13:17:41.322372 kernel: ACPI: Core revision 20240827
Oct 30 13:17:41.322380 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 30 13:17:41.322388 kernel: APIC: Switch to symmetric I/O mode setup
Oct 30 13:17:41.322397 kernel: x2apic enabled
Oct 30 13:17:41.322405 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 30 13:17:41.322416 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 30 13:17:41.322431 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 30 13:17:41.322439 kernel: kvm-guest: setup PV IPIs
Oct 30 13:17:41.322447 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 30 13:17:41.322462 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 30 13:17:41.322470 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 30 13:17:41.322479 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 30 13:17:41.322495 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 30 13:17:41.322503 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 30 13:17:41.322512 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 30 13:17:41.322520 kernel: Spectre V2 : Mitigation: Retpolines
Oct 30 13:17:41.322536 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 30 13:17:41.322544 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 30 13:17:41.322553 kernel: active return thunk: retbleed_return_thunk
Oct 30 13:17:41.322561 kernel: RETBleed: Mitigation: untrained return thunk
Oct 30 13:17:41.322569 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 30 13:17:41.322578 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 30 13:17:41.322586 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 30 13:17:41.322601 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 30 13:17:41.322610 kernel: active return thunk: srso_return_thunk
Oct 30 13:17:41.322619 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 30 13:17:41.322627 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 30 13:17:41.322638 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 30 13:17:41.322646 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 30 13:17:41.322657 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 30 13:17:41.322676 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 30 13:17:41.322685 kernel: Freeing SMP alternatives memory: 32K
Oct 30 13:17:41.322693 kernel: pid_max: default: 32768 minimum: 301
Oct 30 13:17:41.322701 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 13:17:41.322709 kernel: landlock: Up and running.
Oct 30 13:17:41.322718 kernel: SELinux: Initializing.
Oct 30 13:17:41.322728 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 13:17:41.322742 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 13:17:41.322751 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 30 13:17:41.322759 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 30 13:17:41.322768 kernel: ... version: 0
Oct 30 13:17:41.322776 kernel: ... bit width: 48
Oct 30 13:17:41.322784 kernel: ... generic registers: 6
Oct 30 13:17:41.322792 kernel: ... value mask: 0000ffffffffffff
Oct 30 13:17:41.322807 kernel: ... max period: 00007fffffffffff
Oct 30 13:17:41.322815 kernel: ... fixed-purpose events: 0
Oct 30 13:17:41.322823 kernel: ... event mask: 000000000000003f
Oct 30 13:17:41.322832 kernel: signal: max sigframe size: 1776
Oct 30 13:17:41.322840 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 13:17:41.322848 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 13:17:41.322857 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 30 13:17:41.322865 kernel: smp: Bringing up secondary CPUs ...
Oct 30 13:17:41.322880 kernel: smpboot: x86: Booting SMP configuration:
Oct 30 13:17:41.322888 kernel: .... node #0, CPUs: #1 #2 #3
Oct 30 13:17:41.322896 kernel: smp: Brought up 1 node, 4 CPUs
Oct 30 13:17:41.322904 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 30 13:17:41.322913 kernel: Memory: 2447336K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15964K init, 2080K bss, 118476K reserved, 0K cma-reserved)
Oct 30 13:17:41.322921 kernel: devtmpfs: initialized
Oct 30 13:17:41.322930 kernel: x86/mm: Memory block size: 128MB
Oct 30 13:17:41.322944 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 13:17:41.322953 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 30 13:17:41.322961 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 13:17:41.322970 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 13:17:41.322990 kernel: audit: initializing netlink subsys (disabled)
Oct 30 13:17:41.322998 kernel: audit: type=2000 audit(1761830258.340:1): state=initialized audit_enabled=0 res=1
Oct 30 13:17:41.323007 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 13:17:41.323022 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 30 13:17:41.323030 kernel: cpuidle: using governor menu
Oct 30 13:17:41.323039 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 13:17:41.323047 kernel: dca service started, version 1.12.1
Oct 30 13:17:41.323055 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct 30 13:17:41.323064 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 30 13:17:41.323072 kernel: PCI: Using configuration type 1 for base access
Oct 30 13:17:41.323087 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 30 13:17:41.323095 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 30 13:17:41.323104 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 30 13:17:41.323112 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 13:17:41.323120 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 13:17:41.323128 kernel: ACPI: Added _OSI(Module Device)
Oct 30 13:17:41.323137 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 13:17:41.323151 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 13:17:41.323159 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 30 13:17:41.323168 kernel: ACPI: Interpreter enabled
Oct 30 13:17:41.323176 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 30 13:17:41.323184 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 30 13:17:41.323193 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 30 13:17:41.323201 kernel: PCI: Using E820 reservations for host bridge windows
Oct 30 13:17:41.323216 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 30 13:17:41.323224 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 30 13:17:41.323493 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 30 13:17:41.323714 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 30 13:17:41.323936 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 30 13:17:41.323952 kernel: PCI host bridge to bus 0000:00
Oct 30 13:17:41.324166 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 30 13:17:41.324330 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 30 13:17:41.324499 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 30 13:17:41.324661 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 30 13:17:41.324820 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 30 13:17:41.324992 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 30 13:17:41.325170 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 30 13:17:41.325365 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 30 13:17:41.325570 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 30 13:17:41.325789 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Oct 30 13:17:41.326160 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Oct 30 13:17:41.326475 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Oct 30 13:17:41.326768 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 30 13:17:41.327021 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 30 13:17:41.327202 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Oct 30 13:17:41.327396 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Oct 30 13:17:41.327585 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 30 13:17:41.327784 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 30 13:17:41.327958 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Oct 30 13:17:41.328163 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Oct 30 13:17:41.328338 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 30 13:17:41.328538 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 30 13:17:41.328732 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Oct 30 13:17:41.328905 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Oct 30 13:17:41.329093 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 30 13:17:41.329268 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Oct 30 13:17:41.329452 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 30 13:17:41.329636 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 30 13:17:41.329843 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 30 13:17:41.330060 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Oct 30 13:17:41.330242 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Oct 30 13:17:41.330429 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 30 13:17:41.330613 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Oct 30 13:17:41.330638 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 30 13:17:41.330647 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 30 13:17:41.330656 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 30 13:17:41.330665 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 30 13:17:41.330673 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 30 13:17:41.330682 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 30 13:17:41.330690 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 30 13:17:41.330705 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 30 13:17:41.330714 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 30 13:17:41.330722 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 30 13:17:41.330730 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 30 13:17:41.330739 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 30 13:17:41.330747 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 30 13:17:41.330756 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 30 13:17:41.330770 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 30 13:17:41.330779 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 30 13:17:41.330787 kernel: iommu: Default domain type: Translated
Oct 30 13:17:41.330796 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 30 13:17:41.330804 kernel: PCI: Using ACPI for IRQ routing
Oct 30 13:17:41.330813 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 30 13:17:41.330821 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 30 13:17:41.330830 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 30 13:17:41.331028 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 30 13:17:41.331202 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 30 13:17:41.331374 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 30 13:17:41.331384 kernel: vgaarb: loaded
Oct 30 13:17:41.331393 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 30 13:17:41.331402 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 30 13:17:41.331423 kernel: clocksource: Switched to clocksource kvm-clock
Oct 30 13:17:41.331432 kernel: VFS: Disk quotas dquot_6.6.0
Oct 30 13:17:41.331441 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 30 13:17:41.331449 kernel: pnp: PnP ACPI init
Oct 30 13:17:41.331651 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 30 13:17:41.331664 kernel: pnp: PnP ACPI: found 6 devices
Oct 30 13:17:41.331673 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 30 13:17:41.331692 kernel: NET: Registered PF_INET protocol family
Oct 30 13:17:41.331701 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 30 13:17:41.331710 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 30 13:17:41.331718 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 30 13:17:41.331726 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 30 13:17:41.331735 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 30 13:17:41.331743 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 30 13:17:41.331759 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 13:17:41.331767 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 13:17:41.331776 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 30 13:17:41.331784 kernel: NET: Registered PF_XDP protocol family
Oct 30 13:17:41.331950 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 30 13:17:41.332131 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 30 13:17:41.332292 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 30 13:17:41.332465 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 30 13:17:41.332635 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 30 13:17:41.332794 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 30 13:17:41.332806 kernel: PCI: CLS 0 bytes, default 64
Oct 30 13:17:41.332815 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 30 13:17:41.332824 kernel: Initialise system trusted keyrings
Oct 30 13:17:41.332843 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 30 13:17:41.332852 kernel: Key type asymmetric registered
Oct 30 13:17:41.332860 kernel: Asymmetric key parser 'x509' registered
Oct 30 13:17:41.332869 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 30 13:17:41.332878 kernel: io scheduler mq-deadline registered
Oct 30 13:17:41.332887 kernel: io scheduler kyber registered
Oct 30 13:17:41.332895 kernel: io scheduler bfq registered
Oct 30 13:17:41.332904 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 30 13:17:41.332919 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 30 13:17:41.332928 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 30 13:17:41.332936 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 30 13:17:41.332945 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 30 13:17:41.332954 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 30 13:17:41.332962 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 30 13:17:41.332971 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 30 13:17:41.333003 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 30 13:17:41.333184 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 30 13:17:41.333197 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 30 13:17:41.333361 kernel: rtc_cmos 00:04: registered as rtc0
Oct 30 13:17:41.333535 kernel: rtc_cmos 00:04: setting system clock to 2025-10-30T13:17:39 UTC (1761830259)
Oct 30 13:17:41.333705 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 30 13:17:41.333728 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 30 13:17:41.333736 kernel: NET: Registered PF_INET6 protocol family
Oct 30 13:17:41.333745 kernel: Segment Routing with IPv6
Oct 30 13:17:41.333753 kernel: In-situ OAM (IOAM) with IPv6
Oct 30 13:17:41.333762 kernel: NET: Registered PF_PACKET protocol family
Oct 30 13:17:41.333770 kernel: Key type dns_resolver registered
Oct 30 13:17:41.333779 kernel: IPI shorthand broadcast: enabled
Oct 30 13:17:41.333794 kernel: sched_clock: Marking stable (1184004346, 200851100)->(1487661727, -102806281)
Oct 30 13:17:41.333802 kernel: registered taskstats version 1
Oct 30 13:17:41.333811 kernel: Loading compiled-in X.509 certificates
Oct 30 13:17:41.333819 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 94f1b718c5ca9e16ea420e725d7bfe648cbb4295'
Oct 30 13:17:41.333828 kernel: Demotion targets for Node 0: null
Oct 30 13:17:41.333836 kernel: Key type .fscrypt registered
Oct 30 13:17:41.333845 kernel: Key type fscrypt-provisioning registered
Oct 30 13:17:41.333859 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 30 13:17:41.333868 kernel: ima: Allocated hash algorithm: sha1
Oct 30 13:17:41.333876 kernel: ima: No architecture policies found
Oct 30 13:17:41.333884 kernel: clk: Disabling unused clocks
Oct 30 13:17:41.333893 kernel: Freeing unused kernel image (initmem) memory: 15964K
Oct 30 13:17:41.333901 kernel: Write protecting the kernel read-only data: 45056k
Oct 30 13:17:41.333910 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Oct 30 13:17:41.333925 kernel: Run /init as init process
Oct 30 13:17:41.333934 kernel: with arguments:
Oct 30 13:17:41.333942 kernel: /init
Oct 30 13:17:41.333950 kernel: with environment:
Oct 30 13:17:41.333958 kernel: HOME=/
Oct 30 13:17:41.333967 kernel: TERM=linux
Oct 30 13:17:41.333975 kernel: SCSI subsystem initialized
Oct 30 13:17:41.334005 kernel: libata version 3.00 loaded.
Oct 30 13:17:41.334195 kernel: ahci 0000:00:1f.2: version 3.0
Oct 30 13:17:41.334258 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 30 13:17:41.334434 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 30 13:17:41.334619 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 30 13:17:41.334812 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 30 13:17:41.335042 kernel: scsi host0: ahci
Oct 30 13:17:41.335249 kernel: scsi host1: ahci
Oct 30 13:17:41.335516 kernel: scsi host2: ahci
Oct 30 13:17:41.335707 kernel: scsi host3: ahci
Oct 30 13:17:41.335894 kernel: scsi host4: ahci
Oct 30 13:17:41.336110 kernel: scsi host5: ahci
Oct 30 13:17:41.336135 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Oct 30 13:17:41.336145 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Oct 30 13:17:41.336154 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Oct 30 13:17:41.336163 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Oct 30 13:17:41.336172 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Oct 30 13:17:41.336181 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Oct 30 13:17:41.336196 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 30 13:17:41.336205 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 30 13:17:41.336214 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 30 13:17:41.336223 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 30 13:17:41.336238 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 30 13:17:41.336256 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 30 13:17:41.336275 kernel: ata3.00: LPM support broken, forcing max_power
Oct 30 13:17:41.336291 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 30 13:17:41.336300 kernel: ata3.00: applying bridge limits
Oct 30 13:17:41.336309 kernel: ata3.00: LPM support broken, forcing max_power
Oct 30 13:17:41.336317 kernel: ata3.00: configured for UDMA/100
Oct 30 13:17:41.336549 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 30 13:17:41.336744 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 30 13:17:41.336918 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 30 13:17:41.336942 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 30 13:17:41.336951 kernel: GPT:16515071 != 27000831
Oct 30 13:17:41.336959 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 30 13:17:41.336968 kernel: GPT:16515071 != 27000831
Oct 30 13:17:41.336991 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 30 13:17:41.337011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 13:17:41.337217 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 30 13:17:41.337229 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 30 13:17:41.337417 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 30 13:17:41.337430 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 30 13:17:41.337438 kernel: device-mapper: uevent: version 1.0.3
Oct 30 13:17:41.337447 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 30 13:17:41.337457 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 30 13:17:41.337482 kernel: raid6: avx2x4 gen() 23528 MB/s
Oct 30 13:17:41.337498 kernel: raid6: avx2x2 gen() 21549 MB/s
Oct 30 13:17:41.337523 kernel: raid6: avx2x1 gen() 20979 MB/s
Oct 30 13:17:41.337532 kernel: raid6: using algorithm avx2x4 gen() 23528 MB/s
Oct 30 13:17:41.337541 kernel: raid6: .... xor() 6646 MB/s, rmw enabled
Oct 30 13:17:41.337559 kernel: raid6: using avx2x2 recovery algorithm
Oct 30 13:17:41.337569 kernel: xor: automatically using best checksumming function avx
Oct 30 13:17:41.337578 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 30 13:17:41.337588 kernel: BTRFS: device fsid eda3d582-32f5-4286-9f04-debab6c64300 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (182)
Oct 30 13:17:41.337598 kernel: BTRFS info (device dm-0): first mount of filesystem eda3d582-32f5-4286-9f04-debab6c64300
Oct 30 13:17:41.337607 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 30 13:17:41.337616 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 30 13:17:41.337633 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 30 13:17:41.337642 kernel: loop: module loaded
Oct 30 13:17:41.337651 kernel: loop0: detected capacity change from 0 to 100136
Oct 30 13:17:41.337661 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 30 13:17:41.337671 systemd[1]: Successfully made /usr/ read-only.
Oct 30 13:17:41.337684 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 13:17:41.337700 systemd[1]: Detected virtualization kvm.
Oct 30 13:17:41.337710 systemd[1]: Detected architecture x86-64.
Oct 30 13:17:41.337719 systemd[1]: Running in initrd.
Oct 30 13:17:41.337729 systemd[1]: No hostname configured, using default hostname.
Oct 30 13:17:41.337739 systemd[1]: Hostname set to .
Oct 30 13:17:41.337749 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 30 13:17:41.337765 systemd[1]: Queued start job for default target initrd.target.
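A note on the GPT warnings in the log above: the kernel is comparing where the primary GPT header says the backup (alternate) header lives against the actual last LBA of the disk, and they disagree because the image was built for a smaller disk. A minimal sketch of that check, using the numbers from this boot (the device path in the comment and the `sgdisk -e` fix-up are assumptions about how one would repair it; GNU Parted works too, as the kernel message suggests):

```shell
# Numbers taken from the log: vda has 27000832 512-byte sectors, but the
# primary GPT header places the backup header at LBA 16515071 -- typical
# after a small disk image is written onto a larger virtual disk.
disk_sectors=27000832
alt_header_lba=16515071

expected_lba=$((disk_sectors - 1))   # the backup GPT header belongs at the last LBA
if [ "$alt_header_lba" -ne "$expected_lba" ]; then
    echo "backup GPT header at LBA $alt_header_lba, expected $expected_lba"
    # On a live system one would relocate the backup structures with e.g.
    #   sgdisk -e /dev/vda
fi
```

This is only arithmetic on the logged values; it does not touch any device.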
Oct 30 13:17:41.337775 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 13:17:41.337785 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 13:17:41.337795 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 13:17:41.337805 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 30 13:17:41.337815 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 13:17:41.337832 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 30 13:17:41.337843 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 30 13:17:41.337852 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 13:17:41.337862 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 13:17:41.337872 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 13:17:41.337882 systemd[1]: Reached target paths.target - Path Units.
Oct 30 13:17:41.337903 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 13:17:41.337913 systemd[1]: Reached target swap.target - Swaps.
Oct 30 13:17:41.337923 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 13:17:41.337932 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 13:17:41.337942 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 13:17:41.337952 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 30 13:17:41.337962 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 30 13:17:41.337993 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 13:17:41.338003 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 13:17:41.338024 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 13:17:41.338034 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 13:17:41.338044 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 30 13:17:41.338053 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 30 13:17:41.338063 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 13:17:41.338081 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 30 13:17:41.338091 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 30 13:17:41.338100 systemd[1]: Starting systemd-fsck-usr.service...
Oct 30 13:17:41.338110 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 13:17:41.338119 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 13:17:41.338128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 13:17:41.338145 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 30 13:17:41.338155 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 13:17:41.338164 systemd[1]: Finished systemd-fsck-usr.service.
Oct 30 13:17:41.338174 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 30 13:17:41.338222 systemd-journald[317]: Collecting audit messages is disabled.
Oct 30 13:17:41.338250 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 30 13:17:41.338260 kernel: Bridge firewalling registered
Oct 30 13:17:41.338280 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 13:17:41.338290 systemd-journald[317]: Journal started
Oct 30 13:17:41.338309 systemd-journald[317]: Runtime Journal (/run/log/journal/3d3c15b18662403bb2a514873787360a) is 6M, max 48.2M, 42.2M free.
Oct 30 13:17:41.336292 systemd-modules-load[318]: Inserted module 'br_netfilter'
Oct 30 13:17:41.342001 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 13:17:41.345101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 13:17:41.349856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 13:17:41.351204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 13:17:41.357283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 13:17:41.372123 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 13:17:41.445240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 13:17:41.448873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 13:17:41.451444 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 30 13:17:41.456159 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 13:17:41.466563 systemd-tmpfiles[336]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 30 13:17:41.479469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 13:17:41.496194 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 13:17:41.502698 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 30 13:17:41.536299 systemd-resolved[347]: Positive Trust Anchors:
Oct 30 13:17:41.536323 systemd-resolved[347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 13:17:41.536329 systemd-resolved[347]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 30 13:17:41.536372 systemd-resolved[347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 13:17:41.561578 dracut-cmdline[363]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9059fc71bb508d9916e045ba086d15ed58da6c6a917da2fc328a48e57682d73b
Oct 30 13:17:41.581543 systemd-resolved[347]: Defaulting to hostname 'linux'.
Oct 30 13:17:41.582992 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 13:17:41.583998 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 30 13:17:41.686017 kernel: Loading iSCSI transport class v2.0-870.
Oct 30 13:17:41.700017 kernel: iscsi: registered transport (tcp)
Oct 30 13:17:41.726384 kernel: iscsi: registered transport (qla4xxx)
Oct 30 13:17:41.726445 kernel: QLogic iSCSI HBA Driver
Oct 30 13:17:41.755233 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 13:17:41.792654 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 13:17:41.798453 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 13:17:41.861513 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 30 13:17:41.865105 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 30 13:17:41.867456 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 30 13:17:41.910275 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 13:17:41.915616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 13:17:41.951959 systemd-udevd[604]: Using default interface naming scheme 'v257'.
Oct 30 13:17:41.967292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 13:17:41.973812 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 30 13:17:42.000404 dracut-pre-trigger[673]: rd.md=0: removing MD RAID activation
Oct 30 13:17:42.006388 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 13:17:42.012492 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 13:17:42.038190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 13:17:42.040796 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 13:17:42.086648 systemd-networkd[715]: lo: Link UP
Oct 30 13:17:42.086657 systemd-networkd[715]: lo: Gained carrier
Oct 30 13:17:42.087488 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 13:17:42.088527 systemd[1]: Reached target network.target - Network.
Oct 30 13:17:42.146062 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 13:17:42.153121 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 30 13:17:42.219830 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 30 13:17:42.243013 kernel: cryptd: max_cpu_qlen set to 1000
Oct 30 13:17:42.256780 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 30 13:17:42.269022 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 30 13:17:42.273572 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 30 13:17:42.280008 kernel: AES CTR mode by8 optimization enabled
Oct 30 13:17:42.284647 systemd-networkd[715]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 13:17:42.284657 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 13:17:42.287218 systemd-networkd[715]: eth0: Link UP
Oct 30 13:17:42.287461 systemd-networkd[715]: eth0: Gained carrier
Oct 30 13:17:42.287471 systemd-networkd[715]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 13:17:42.291314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 13:17:42.302057 systemd-networkd[715]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 30 13:17:42.312220 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 30 13:17:42.314785 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 13:17:42.315001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 13:17:42.315662 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 13:17:42.318278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 13:17:42.353279 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 30 13:17:42.357992 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 13:17:42.361919 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 13:17:42.362758 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 13:17:42.370157 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 30 13:17:42.421200 disk-uuid[843]: Primary Header is updated.
Oct 30 13:17:42.421200 disk-uuid[843]: Secondary Entries is updated.
Oct 30 13:17:42.421200 disk-uuid[843]: Secondary Header is updated.
Oct 30 13:17:42.468637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 13:17:42.488628 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 13:17:43.469068 disk-uuid[847]: Warning: The kernel is still using the old partition table.
Oct 30 13:17:43.469068 disk-uuid[847]: The new table will be used at the next reboot or after you
Oct 30 13:17:43.469068 disk-uuid[847]: run partprobe(8) or kpartx(8)
Oct 30 13:17:43.469068 disk-uuid[847]: The operation has completed successfully.
Oct 30 13:17:43.478904 systemd[1]: disk-uuid.service: Deactivated successfully.
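The disk-uuid warning above notes that the kernel keeps its old in-memory partition table until it is asked to re-read it. A sketch of that re-read step (the device path is an assumption, and the commands are echoed as a dry run so nothing is touched; drop the `echo` to actually run them on a live system):

```shell
dev=/dev/vda   # device from this boot; adjust for the system at hand

# partprobe(8) (from parted/util-linux) asks the kernel to re-read the table:
echo partprobe "$dev"
# kpartx(8) -u is the device-mapper alternative named in the warning:
echo kpartx -u "$dev"
```

On this boot neither is needed: the message itself says the new table takes effect at the next reboot, and the initrd reboots into the real root shortly after.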
Oct 30 13:17:43.479083 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 30 13:17:43.484809 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 30 13:17:43.524054 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865)
Oct 30 13:17:43.524127 kernel: BTRFS info (device vda6): first mount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57
Oct 30 13:17:43.527113 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 13:17:43.530862 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 13:17:43.530893 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 13:17:43.540000 kernel: BTRFS info (device vda6): last unmount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57
Oct 30 13:17:43.540804 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 30 13:17:43.544272 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 30 13:17:43.812080 ignition[884]: Ignition 2.22.0
Oct 30 13:17:43.812094 ignition[884]: Stage: fetch-offline
Oct 30 13:17:43.812151 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Oct 30 13:17:43.812163 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:17:43.812269 ignition[884]: parsed url from cmdline: ""
Oct 30 13:17:43.812273 ignition[884]: no config URL provided
Oct 30 13:17:43.812288 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Oct 30 13:17:43.812300 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Oct 30 13:17:43.812354 ignition[884]: op(1): [started] loading QEMU firmware config module
Oct 30 13:17:43.812359 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 30 13:17:43.824134 ignition[884]: op(1): [finished] loading QEMU firmware config module
Oct 30 13:17:43.902909 ignition[884]: parsing config with SHA512: 7fcac67fd9a95ffaa8b190d1baef998f484d19fcb070aceb8229aeb80f84d118e1bd4eda65fb8f03a212940f2ba88aebadabfec8639daf7d5b61b416c4c22e63
Oct 30 13:17:43.911791 unknown[884]: fetched base config from "system"
Oct 30 13:17:43.912564 unknown[884]: fetched user config from "qemu"
Oct 30 13:17:43.912962 ignition[884]: fetch-offline: fetch-offline passed
Oct 30 13:17:43.913049 ignition[884]: Ignition finished successfully
Oct 30 13:17:43.920134 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 13:17:43.924188 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 30 13:17:43.925481 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
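Ignition logs the SHA512 of the config it parsed (above). Assuming that digest is a plain SHA512 over the raw config bytes, a local config file can be checked against a logged value like this (the sample config contents here are a placeholder, not the config from this boot):

```shell
# Write a placeholder Ignition config to a temp file and hash it the same
# way the logged digest was produced (SHA512 over the raw bytes).
cfg=$(mktemp)
printf '{"ignition":{"version":"3.4.0"}}' > "$cfg"

# sha512sum prints "<128 hex chars>  <file>"; keep only the digest field:
digest=$(sha512sum "$cfg" | awk '{print $1}')
echo "config digest: $digest"
rm -f "$cfg"
```

Comparing `$digest` with the value in the journal confirms the node parsed exactly the config you think it did.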
Oct 30 13:17:43.993198 ignition[894]: Ignition 2.22.0
Oct 30 13:17:43.993213 ignition[894]: Stage: kargs
Oct 30 13:17:43.993348 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Oct 30 13:17:43.993359 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:17:43.994086 ignition[894]: kargs: kargs passed
Oct 30 13:17:43.999615 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 30 13:17:43.994138 ignition[894]: Ignition finished successfully
Oct 30 13:17:44.003084 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 30 13:17:44.062289 ignition[902]: Ignition 2.22.0
Oct 30 13:17:44.062303 ignition[902]: Stage: disks
Oct 30 13:17:44.062482 ignition[902]: no configs at "/usr/lib/ignition/base.d"
Oct 30 13:17:44.062493 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:17:44.066880 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 30 13:17:44.063415 ignition[902]: disks: disks passed
Oct 30 13:17:44.069577 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 30 13:17:44.063464 ignition[902]: Ignition finished successfully
Oct 30 13:17:44.072723 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 30 13:17:44.075749 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 13:17:44.076476 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 13:17:44.077016 systemd[1]: Reached target basic.target - Basic System.
Oct 30 13:17:44.078304 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 30 13:17:44.132083 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 30 13:17:44.142180 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 30 13:17:44.145149 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 30 13:17:44.226157 systemd-networkd[715]: eth0: Gained IPv6LL
Oct 30 13:17:44.281015 kernel: EXT4-fs (vda9): mounted filesystem 6e47eb19-ed37-4e0f-85fc-4a1fde834fe4 r/w with ordered data mode. Quota mode: none.
Oct 30 13:17:44.281711 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 30 13:17:44.283023 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 30 13:17:44.286200 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 13:17:44.290573 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 30 13:17:44.291725 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 30 13:17:44.291764 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 30 13:17:44.291789 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 13:17:44.311517 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 30 13:17:44.315904 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920)
Oct 30 13:17:44.315120 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 30 13:17:44.323572 kernel: BTRFS info (device vda6): first mount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57
Oct 30 13:17:44.323589 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 13:17:44.323601 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 13:17:44.323612 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 13:17:44.326055 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 13:17:44.379691 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Oct 30 13:17:44.384248 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Oct 30 13:17:44.390356 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Oct 30 13:17:44.395962 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 30 13:17:44.498008 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 30 13:17:44.501698 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 30 13:17:44.503109 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 30 13:17:44.530089 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 30 13:17:44.533386 kernel: BTRFS info (device vda6): last unmount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57
Oct 30 13:17:44.546168 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 30 13:17:44.602666 ignition[1034]: INFO : Ignition 2.22.0
Oct 30 13:17:44.602666 ignition[1034]: INFO : Stage: mount
Oct 30 13:17:44.605494 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 13:17:44.605494 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:17:44.605494 ignition[1034]: INFO : mount: mount passed
Oct 30 13:17:44.605494 ignition[1034]: INFO : Ignition finished successfully
Oct 30 13:17:44.606739 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 30 13:17:44.610028 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 30 13:17:44.643761 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 13:17:44.665115 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1046)
Oct 30 13:17:44.665166 kernel: BTRFS info (device vda6): first mount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57
Oct 30 13:17:44.665178 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 13:17:44.670419 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 13:17:44.670444 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 13:17:44.672246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 13:17:44.713303 ignition[1063]: INFO : Ignition 2.22.0
Oct 30 13:17:44.713303 ignition[1063]: INFO : Stage: files
Oct 30 13:17:44.715850 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 13:17:44.715850 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:17:44.715850 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Oct 30 13:17:44.721926 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 30 13:17:44.721926 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 30 13:17:44.728954 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 30 13:17:44.731281 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 30 13:17:44.733655 unknown[1063]: wrote ssh authorized keys file for user: core
Oct 30 13:17:44.735390 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 30 13:17:44.737641 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 30 13:17:44.737641 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 30 13:17:44.773791 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 30 13:17:44.899089 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 30 13:17:44.899089 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 13:17:44.905564 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 13:17:44.934424 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 13:17:44.934424 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 13:17:44.934424 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 30 13:17:45.383271 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 30 13:17:45.960000 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 13:17:45.964436 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 30 13:17:45.964436 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 13:17:45.969728 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 13:17:45.969728 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 30 13:17:45.969728 ignition[1063]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 30 13:17:45.969728 ignition[1063]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 13:17:45.969728 ignition[1063]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 13:17:45.969728 ignition[1063]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 30 13:17:45.969728 ignition[1063]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 30 13:17:45.991336 ignition[1063]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 13:17:45.998284 ignition[1063]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 13:17:46.001314 ignition[1063]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 30 13:17:46.001314 ignition[1063]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 30 13:17:46.005910 ignition[1063]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 30 13:17:46.005910 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 13:17:46.005910 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 13:17:46.005910 ignition[1063]: INFO : files: files passed
Oct 30 13:17:46.005910 ignition[1063]: INFO : Ignition finished successfully
Oct 30 13:17:46.009362 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 30 13:17:46.018584 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 30 13:17:46.020662 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 30 13:17:46.042026 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 30 13:17:46.042155 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 30 13:17:46.048178 initrd-setup-root-after-ignition[1093]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 30 13:17:46.050590 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 13:17:46.050590 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 13:17:46.057187 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 13:17:46.061348 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 13:17:46.062181 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 30 13:17:46.068787 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 30 13:17:46.129284 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 30 13:17:46.129422 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 30 13:17:46.134675 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 30 13:17:46.137819 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 30 13:17:46.138926 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 30 13:17:46.143454 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 30 13:17:46.181903 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 13:17:46.183883 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 30 13:17:46.207154 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 13:17:46.207336 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 30 13:17:46.208599 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 13:17:46.213689 systemd[1]: Stopped target timers.target - Timer Units.
Oct 30 13:17:46.214574 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 30 13:17:46.214786 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 13:17:46.220650 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 30 13:17:46.224941 systemd[1]: Stopped target basic.target - Basic System.
Oct 30 13:17:46.228629 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 30 13:17:46.231774 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 13:17:46.235112 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 30 13:17:46.235862 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 13:17:46.236756 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 30 13:17:46.246151 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 13:17:46.251352 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 30 13:17:46.252817 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 30 13:17:46.257871 systemd[1]: Stopped target swap.target - Swaps.
Oct 30 13:17:46.260809 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 30 13:17:46.261194 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 13:17:46.267525 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 30 13:17:46.270965 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 13:17:46.274828 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 30 13:17:46.275308 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 13:17:46.275851 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 30 13:17:46.275963 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 30 13:17:46.283369 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 30 13:17:46.283486 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 13:17:46.286778 systemd[1]: Stopped target paths.target - Path Units.
Oct 30 13:17:46.289611 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 30 13:17:46.294238 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 13:17:46.296457 systemd[1]: Stopped target slices.target - Slice Units.
Oct 30 13:17:46.297684 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 30 13:17:46.304298 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 30 13:17:46.304414 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 13:17:46.307350 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 30 13:17:46.307457 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 13:17:46.308628 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 30 13:17:46.308844 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 13:17:46.312676 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 30 13:17:46.312882 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 30 13:17:46.318785 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 30 13:17:46.322076 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 30 13:17:46.322227 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 13:17:46.324425 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 30 13:17:46.334814 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 30 13:17:46.335011 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 13:17:46.335960 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 30 13:17:46.336187 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 13:17:46.341549 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 30 13:17:46.341745 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 13:17:46.354424 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 30 13:17:46.354592 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 30 13:17:46.383359 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 30 13:17:46.386135 ignition[1120]: INFO : Ignition 2.22.0
Oct 30 13:17:46.386135 ignition[1120]: INFO : Stage: umount
Oct 30 13:17:46.388927 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 13:17:46.388927 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:17:46.388927 ignition[1120]: INFO : umount: umount passed
Oct 30 13:17:46.388927 ignition[1120]: INFO : Ignition finished successfully
Oct 30 13:17:46.396774 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 30 13:17:46.396917 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 30 13:17:46.402309 systemd[1]: Stopped target network.target - Network.
Oct 30 13:17:46.403385 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 30 13:17:46.403453 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 30 13:17:46.406619 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 30 13:17:46.406677 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 30 13:17:46.409840 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 30 13:17:46.409897 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 30 13:17:46.413451 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 30 13:17:46.413510 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 30 13:17:46.418387 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 30 13:17:46.419744 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 30 13:17:46.444396 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 30 13:17:46.444563 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 30 13:17:46.451759 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 30 13:17:46.451902 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 30 13:17:46.457715 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 30 13:17:46.457832 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 30 13:17:46.459705 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 30 13:17:46.462554 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 30 13:17:46.462604 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 13:17:46.463069 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 30 13:17:46.463125 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 30 13:17:46.470915 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 30 13:17:46.472510 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 30 13:17:46.472574 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 13:17:46.475428 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 30 13:17:46.475485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 30 13:17:46.478738 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 30 13:17:46.478797 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 30 13:17:46.481918 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 13:17:46.518664 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 30 13:17:46.529164 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 13:17:46.530220 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 30 13:17:46.530270 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 30 13:17:46.536374 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 30 13:17:46.536419 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 13:17:46.537492 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 30 13:17:46.537545 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 13:17:46.544101 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 30 13:17:46.544322 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 30 13:17:46.548745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 30 13:17:46.548805 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 13:17:46.554462 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 30 13:17:46.555654 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 30 13:17:46.555712 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 13:17:46.559360 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 30 13:17:46.559418 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 13:17:46.563034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 13:17:46.563087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 13:17:46.567182 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 30 13:17:46.577190 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 30 13:17:46.587558 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 30 13:17:46.587696 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 30 13:17:46.591581 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 30 13:17:46.593442 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 30 13:17:46.632696 systemd[1]: Switching root.
Oct 30 13:17:46.670686 systemd-journald[317]: Journal stopped
Oct 30 13:17:48.063855 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Oct 30 13:17:48.066962 kernel: SELinux: policy capability network_peer_controls=1
Oct 30 13:17:48.067023 kernel: SELinux: policy capability open_perms=1
Oct 30 13:17:48.067068 kernel: SELinux: policy capability extended_socket_class=1
Oct 30 13:17:48.067082 kernel: SELinux: policy capability always_check_network=0
Oct 30 13:17:48.067095 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 30 13:17:48.067108 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 30 13:17:48.067120 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 30 13:17:48.067138 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 30 13:17:48.067151 kernel: SELinux: policy capability userspace_initial_context=0
Oct 30 13:17:48.067172 kernel: audit: type=1403 audit(1761830267.143:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 30 13:17:48.067188 systemd[1]: Successfully loaded SELinux policy in 68.032ms.
Oct 30 13:17:48.067225 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.870ms.
Oct 30 13:17:48.067241 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 13:17:48.067255 systemd[1]: Detected virtualization kvm.
Oct 30 13:17:48.067268 systemd[1]: Detected architecture x86-64.
Oct 30 13:17:48.067282 systemd[1]: Detected first boot.
Oct 30 13:17:48.067304 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 30 13:17:48.067316 kernel: Guest personality initialized and is inactive
Oct 30 13:17:48.067329 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 30 13:17:48.067342 kernel: Initialized host personality
Oct 30 13:17:48.067356 zram_generator::config[1167]: No configuration found.
Oct 30 13:17:48.067371 kernel: NET: Registered PF_VSOCK protocol family
Oct 30 13:17:48.067383 systemd[1]: Populated /etc with preset unit settings.
Oct 30 13:17:48.067404 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 30 13:17:48.067418 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 30 13:17:48.067431 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 30 13:17:48.067446 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 30 13:17:48.067460 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 30 13:17:48.067473 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 30 13:17:48.067486 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 30 13:17:48.067513 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 30 13:17:48.067527 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 30 13:17:48.067539 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 30 13:17:48.067552 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 30 13:17:48.067566 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 13:17:48.067579 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 13:17:48.067614 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 30 13:17:48.067635 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 30 13:17:48.067649 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 30 13:17:48.067669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 13:17:48.067683 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 30 13:17:48.067696 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 13:17:48.067709 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 13:17:48.067736 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 30 13:17:48.067749 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 30 13:17:48.067762 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 30 13:17:48.067775 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 30 13:17:48.067788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 13:17:48.067802 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 13:17:48.067814 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 13:17:48.067834 systemd[1]: Reached target swap.target - Swaps.
Oct 30 13:17:48.067847 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 30 13:17:48.067860 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 30 13:17:48.067873 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 30 13:17:48.067886 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 13:17:48.067902 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 13:17:48.067915 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 13:17:48.067958 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 30 13:17:48.067972 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 30 13:17:48.067998 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 30 13:17:48.068013 systemd[1]: Mounting media.mount - External Media Directory...
Oct 30 13:17:48.068026 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:17:48.068039 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 30 13:17:48.068052 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 30 13:17:48.068073 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 30 13:17:48.068087 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 30 13:17:48.068100 systemd[1]: Reached target machines.target - Containers.
Oct 30 13:17:48.068113 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 30 13:17:48.068126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 13:17:48.068138 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 13:17:48.068151 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 30 13:17:48.068171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 13:17:48.068184 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 13:17:48.068198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 13:17:48.068210 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 30 13:17:48.068232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 13:17:48.068677 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 30 13:17:48.068698 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 30 13:17:48.068724 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 30 13:17:48.068737 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 30 13:17:48.068750 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 30 13:17:48.068765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 13:17:48.068778 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 13:17:48.068795 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 13:17:48.068816 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 13:17:48.068889 systemd-journald[1230]: Collecting audit messages is disabled.
Oct 30 13:17:48.068917 kernel: fuse: init (API version 7.41)
Oct 30 13:17:48.068931 systemd-journald[1230]: Journal started
Oct 30 13:17:48.068956 systemd-journald[1230]: Runtime Journal (/run/log/journal/3d3c15b18662403bb2a514873787360a) is 6M, max 48.2M, 42.2M free.
Oct 30 13:17:47.742168 systemd[1]: Queued start job for default target multi-user.target.
Oct 30 13:17:47.766207 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 30 13:17:47.766786 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 30 13:17:48.073166 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 30 13:17:48.079295 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 30 13:17:48.089048 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 13:17:48.094082 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:17:48.098318 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 13:17:48.101597 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 30 13:17:48.103388 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 30 13:17:48.105557 systemd[1]: Mounted media.mount - External Media Directory.
Oct 30 13:17:48.107425 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 30 13:17:48.109259 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 30 13:17:48.111236 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 30 13:17:48.115021 kernel: ACPI: bus type drm_connector registered
Oct 30 13:17:48.115464 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 13:17:48.118199 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 30 13:17:48.120371 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 30 13:17:48.120599 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 30 13:17:48.122951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 13:17:48.123290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 13:17:48.125376 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 13:17:48.125595 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 13:17:48.127550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 13:17:48.127770 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 13:17:48.130011 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 30 13:17:48.130242 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 30 13:17:48.132254 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 13:17:48.132477 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 13:17:48.134529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 13:17:48.136727 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 13:17:48.139915 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 30 13:17:48.142353 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 30 13:17:48.158189 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 13:17:48.160364 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 30 13:17:48.163790 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 30 13:17:48.166715 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 30 13:17:48.168497 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 30 13:17:48.168617 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 13:17:48.171408 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 30 13:17:48.174224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 13:17:48.177299 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 30 13:17:48.181150 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 30 13:17:48.183291 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 13:17:48.184846 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 30 13:17:48.186855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 13:17:48.192033 systemd-journald[1230]: Time spent on flushing to /var/log/journal/3d3c15b18662403bb2a514873787360a is 16.646ms for 961 entries.
Oct 30 13:17:48.192033 systemd-journald[1230]: System Journal (/var/log/journal/3d3c15b18662403bb2a514873787360a) is 8M, max 163.5M, 155.5M free.
Oct 30 13:17:48.352234 systemd-journald[1230]: Received client request to flush runtime journal.
Oct 30 13:17:48.352302 kernel: loop1: detected capacity change from 0 to 229808
Oct 30 13:17:48.352337 kernel: loop2: detected capacity change from 0 to 128912
Oct 30 13:17:48.352359 kernel: loop3: detected capacity change from 0 to 111544
Oct 30 13:17:48.191110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 13:17:48.355520 kernel: loop4: detected capacity change from 0 to 229808
Oct 30 13:17:48.195485 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 30 13:17:48.200571 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 30 13:17:48.204084 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 30 13:17:48.206159 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 30 13:17:48.210201 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 13:17:48.227485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 13:17:48.336872 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 30 13:17:48.341516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 13:17:48.344656 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 13:17:48.346898 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 30 13:17:48.349817 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 30 13:17:48.354888 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 30 13:17:48.358346 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 30 13:17:48.372899 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 30 13:17:48.383537 kernel: loop5: detected capacity change from 0 to 128912
Oct 30 13:17:48.380135 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 30 13:17:48.406021 kernel: loop6: detected capacity change from 0 to 111544
Oct 30 13:17:48.410399 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Oct 30 13:17:48.410415 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Oct 30 13:17:48.416186 (sd-merge)[1300]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 30 13:17:48.417577 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 13:17:48.426239 (sd-merge)[1300]: Merged extensions into '/usr'.
Oct 30 13:17:48.430960 systemd[1]: Reload requested from client PID 1285 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 30 13:17:48.430995 systemd[1]: Reloading...
Oct 30 13:17:48.487225 zram_generator::config[1341]: No configuration found.
Oct 30 13:17:48.544674 systemd-resolved[1297]: Positive Trust Anchors:
Oct 30 13:17:48.544696 systemd-resolved[1297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 13:17:48.544701 systemd-resolved[1297]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 30 13:17:48.544731 systemd-resolved[1297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 13:17:48.548958 systemd-resolved[1297]: Defaulting to hostname 'linux'.
Oct 30 13:17:48.715173 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 30 13:17:48.715324 systemd[1]: Reloading finished in 283 ms.
Oct 30 13:17:48.750624 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 30 13:17:48.752920 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 13:17:48.755211 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 30 13:17:48.759747 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 30 13:17:48.779358 systemd[1]: Starting ensure-sysext.service...
Oct 30 13:17:48.782052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 13:17:48.810700 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 30 13:17:48.818419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 13:17:48.821571 systemd[1]: Reload requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)...
Oct 30 13:17:48.821591 systemd[1]: Reloading...
Oct 30 13:17:48.856970 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 30 13:17:48.857032 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 30 13:17:48.857384 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 30 13:17:48.857661 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 30 13:17:48.858637 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 30 13:17:48.859521 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Oct 30 13:17:48.859602 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Oct 30 13:17:48.865439 systemd-udevd[1380]: Using default interface naming scheme 'v257'.
Oct 30 13:17:48.873495 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 13:17:48.873513 systemd-tmpfiles[1378]: Skipping /boot
Oct 30 13:17:48.893739 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 13:17:48.895793 systemd-tmpfiles[1378]: Skipping /boot
Oct 30 13:17:48.897068 zram_generator::config[1412]: No configuration found.
Oct 30 13:17:48.999024 kernel: mousedev: PS/2 mouse device common for all mice
Oct 30 13:17:49.010004 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 30 13:17:49.017265 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 30 13:17:49.017961 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 30 13:17:49.023016 kernel: ACPI: button: Power Button [PWRF]
Oct 30 13:17:49.123456 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 13:17:49.126159 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 30 13:17:49.126727 systemd[1]: Reloading finished in 304 ms.
Oct 30 13:17:49.179905 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 13:17:49.186473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 13:17:49.212270 kernel: kvm_amd: TSC scaling supported
Oct 30 13:17:49.212337 kernel: kvm_amd: Nested Virtualization enabled
Oct 30 13:17:49.212377 kernel: kvm_amd: Nested Paging enabled
Oct 30 13:17:49.212402 kernel: kvm_amd: LBR virtualization supported
Oct 30 13:17:49.214597 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 30 13:17:49.214693 kernel: kvm_amd: Virtual GIF supported
Oct 30 13:17:49.241003 kernel: EDAC MC: Ver: 3.0.0
Oct 30 13:17:49.259515 systemd[1]: Finished ensure-sysext.service.
Oct 30 13:17:49.270067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:17:49.271687 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 13:17:49.274446 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 30 13:17:49.276376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 13:17:49.277619 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 30 13:17:49.283553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 13:17:49.286368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 13:17:49.288285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 13:17:49.296148 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 13:17:49.298337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 13:17:49.302109 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 30 13:17:49.304105 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 13:17:49.307098 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 30 13:17:49.315068 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 13:17:49.319562 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 30 13:17:49.323751 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 30 13:17:49.329064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 13:17:49.332389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 13:17:49.335267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 13:17:49.336910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 13:17:49.339315 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 13:17:49.340055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 13:17:49.342971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 13:17:49.343608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 13:17:49.349360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 13:17:49.349855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 13:17:49.353040 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 30 13:17:49.361660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 13:17:49.361830 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 13:17:49.370057 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 30 13:17:49.378541 augenrules[1532]: No rules
Oct 30 13:17:49.382250 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 13:17:49.384852 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 13:17:49.389202 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 30 13:17:49.430569 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 30 13:17:49.433155 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 30 13:17:49.459737 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 30 13:17:49.466529 systemd-networkd[1508]: lo: Link UP
Oct 30 13:17:49.466541 systemd-networkd[1508]: lo: Gained carrier
Oct 30 13:17:49.468833 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 13:17:49.468843 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 13:17:49.469597 systemd-networkd[1508]: eth0: Link UP
Oct 30 13:17:49.469878 systemd-networkd[1508]: eth0: Gained carrier
Oct 30 13:17:49.469896 systemd-networkd[1508]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 13:17:49.496037 systemd-networkd[1508]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 30 13:17:49.496791 systemd-timesyncd[1511]: Network configuration changed, trying to establish connection.
Oct 30 13:17:49.498764 systemd-timesyncd[1511]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 30 13:17:49.498823 systemd-timesyncd[1511]: Initial clock synchronization to Thu 2025-10-30 13:17:49.681966 UTC.
Oct 30 13:17:49.527058 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 13:17:49.531178 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 13:17:49.533933 systemd[1]: Reached target network.target - Network.
Oct 30 13:17:49.535423 systemd[1]: Reached target time-set.target - System Time Set.
Oct 30 13:17:49.538576 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 30 13:17:49.541579 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 30 13:17:49.570529 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 30 13:17:49.820290 ldconfig[1493]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 30 13:17:49.828039 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 30 13:17:49.831946 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 30 13:17:49.873597 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 30 13:17:49.875803 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 13:17:49.877657 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 30 13:17:49.879699 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 30 13:17:49.881757 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 30 13:17:49.883839 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 30 13:17:49.885699 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 30 13:17:49.887761 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 30 13:17:49.889820 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 30 13:17:49.889854 systemd[1]: Reached target paths.target - Path Units.
Oct 30 13:17:49.891598 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 13:17:49.894038 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 30 13:17:49.897640 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 30 13:17:49.901447 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 30 13:17:49.903594 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 30 13:17:49.905602 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 30 13:17:49.909794 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 30 13:17:49.911752 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 30 13:17:49.914257 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 30 13:17:49.916720 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 13:17:49.918274 systemd[1]: Reached target basic.target - Basic System.
Oct 30 13:17:49.919806 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 30 13:17:49.919844 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 30 13:17:49.921027 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 30 13:17:49.924078 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 30 13:17:49.926780 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 30 13:17:49.935636 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 30 13:17:49.938624 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 30 13:17:49.940223 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 30 13:17:49.941531 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 30 13:17:49.943555 jq[1561]: false
Oct 30 13:17:49.943556 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 30 13:17:49.947064 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 30 13:17:49.951363 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 30 13:17:49.956059 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 30 13:17:49.962760 extend-filesystems[1562]: Found /dev/vda6
Oct 30 13:17:49.966486 extend-filesystems[1562]: Found /dev/vda9
Oct 30 13:17:49.962967 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 30 13:17:49.965100 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 30 13:17:49.965697 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 30 13:17:49.969151 extend-filesystems[1562]: Checking size of /dev/vda9
Oct 30 13:17:49.970161 systemd[1]: Starting update-engine.service - Update Engine...
Oct 30 13:17:49.971431 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing passwd entry cache
Oct 30 13:17:49.971415 oslogin_cache_refresh[1563]: Refreshing passwd entry cache
Oct 30 13:17:49.973651 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 30 13:17:49.978127 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 30 13:17:49.981481 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 30 13:17:49.981747 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 30 13:17:49.982152 systemd[1]: motdgen.service: Deactivated successfully.
Oct 30 13:17:49.982433 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 30 13:17:49.984569 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting users, quitting
Oct 30 13:17:49.984561 oslogin_cache_refresh[1563]: Failure getting users, quitting
Oct 30 13:17:49.984648 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 30 13:17:49.984648 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing group entry cache
Oct 30 13:17:49.984582 oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 30 13:17:49.984638 oslogin_cache_refresh[1563]: Refreshing group entry cache
Oct 30 13:17:49.985249 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 30 13:17:49.985523 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 30 13:17:49.997167 jq[1581]: true
Oct 30 13:17:49.995919 oslogin_cache_refresh[1563]: Failure getting groups, quitting
Oct 30 13:17:49.997373 extend-filesystems[1562]: Resized partition /dev/vda9
Oct 30 13:17:50.000113 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting groups, quitting
Oct 30 13:17:50.000113 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 30 13:17:49.995931 oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 30 13:17:49.997783 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 30 13:17:49.998626 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 30 13:17:50.006084 extend-filesystems[1605]: resize2fs 1.47.3 (8-Jul-2025)
Oct 30 13:17:50.011151 update_engine[1578]: I20251030 13:17:50.009243 1578 main.cc:92] Flatcar Update Engine starting
Oct 30 13:17:50.012207 jq[1601]: true
Oct 30 13:17:50.016679 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 30 13:17:50.033529 tar[1587]: linux-amd64/LICENSE
Oct 30 13:17:50.039423 tar[1587]: linux-amd64/helm
Oct 30 13:17:50.049304 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 30 13:17:50.053827 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 30 13:17:50.070597 update_engine[1578]: I20251030 13:17:50.061621 1578 update_check_scheduler.cc:74] Next update check in 11m20s
Oct 30 13:17:50.053569 dbus-daemon[1559]: [system] SELinux support is enabled
Oct 30 13:17:50.077091 extend-filesystems[1605]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 30 13:17:50.077091 extend-filesystems[1605]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 30 13:17:50.077091 extend-filesystems[1605]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 30 13:17:50.087088 extend-filesystems[1562]: Resized filesystem in /dev/vda9
Oct 30 13:17:50.088971 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 30 13:17:50.089325 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 30 13:17:50.108491 systemd[1]: Started update-engine.service - Update Engine.
Oct 30 13:17:50.110795 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 30 13:17:50.110873 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 30 13:17:50.110898 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 30 13:17:50.113085 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 30 13:17:50.113107 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 30 13:17:50.116973 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 30 13:17:50.121369 bash[1626]: Updated "/home/core/.ssh/authorized_keys"
Oct 30 13:17:50.119253 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 30 13:17:50.129637 systemd-logind[1576]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 30 13:17:50.130198 systemd-logind[1576]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 30 13:17:50.130720 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 30 13:17:50.130772 systemd-logind[1576]: New seat seat0.
Oct 30 13:17:50.132659 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 30 13:17:50.146151 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 30 13:17:50.154287 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 30 13:17:50.207315 systemd[1]: issuegen.service: Deactivated successfully.
Oct 30 13:17:50.207623 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 30 13:17:50.214275 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 30 13:17:50.233992 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 30 13:17:50.246565 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 30 13:17:50.250657 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 30 13:17:50.255519 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 30 13:17:50.258808 systemd[1]: Reached target getty.target - Login Prompts.
Oct 30 13:17:50.487171 containerd[1588]: time="2025-10-30T13:17:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 30 13:17:50.490027 containerd[1588]: time="2025-10-30T13:17:50.488990340Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 30 13:17:50.499608 containerd[1588]: time="2025-10-30T13:17:50.499556435Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.765µs"
Oct 30 13:17:50.499608 containerd[1588]: time="2025-10-30T13:17:50.499602135Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 30 13:17:50.499669 containerd[1588]: time="2025-10-30T13:17:50.499627028Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 30 13:17:50.499886 containerd[1588]: time="2025-10-30T13:17:50.499851900Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 30 13:17:50.499918 containerd[1588]: time="2025-10-30T13:17:50.499887176Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 30 13:17:50.499940 containerd[1588]: time="2025-10-30T13:17:50.499918886Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 13:17:50.500142 containerd[1588]: time="2025-10-30T13:17:50.500115685Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 13:17:50.500173 containerd[1588]: time="2025-10-30T13:17:50.500141440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 13:17:50.500580 containerd[1588]: time="2025-10-30T13:17:50.500548188Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 13:17:50.500611 containerd[1588]: time="2025-10-30T13:17:50.500579929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 13:17:50.500611 containerd[1588]: time="2025-10-30T13:17:50.500594164Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 13:17:50.500611 containerd[1588]: time="2025-10-30T13:17:50.500606361Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 30 13:17:50.500787 containerd[1588]: time="2025-10-30T13:17:50.500721220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 30 13:17:50.501250 containerd[1588]: time="2025-10-30T13:17:50.501218024Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 13:17:50.501288 containerd[1588]: time="2025-10-30T13:17:50.501265333Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 13:17:50.501288 containerd[1588]: time="2025-10-30T13:17:50.501278093Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 30 13:17:50.501326 containerd[1588]: time="2025-10-30T13:17:50.501306810Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 30 13:17:50.501584 containerd[1588]: time="2025-10-30T13:17:50.501547597Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 30 13:17:50.501695 containerd[1588]: time="2025-10-30T13:17:50.501666566Z" level=info msg="metadata content store policy set" policy=shared
Oct 30 13:17:50.508116 containerd[1588]: time="2025-10-30T13:17:50.508079599Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 30 13:17:50.508156 containerd[1588]: time="2025-10-30T13:17:50.508125197Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 30 13:17:50.508177 containerd[1588]: time="2025-10-30T13:17:50.508167238Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 30 13:17:50.508197 containerd[1588]: time="2025-10-30T13:17:50.508182631Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 30 13:17:50.508217 containerd[1588]: time="2025-10-30T13:17:50.508195688Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 30 13:17:50.508217 containerd[1588]: time="2025-10-30T13:17:50.508207413Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 30 13:17:50.508269 containerd[1588]: time="2025-10-30T13:17:50.508219548Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 30 13:17:50.508269 containerd[1588]: time="2025-10-30T13:17:50.508232318Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 30 13:17:50.508269 containerd[1588]: time="2025-10-30T13:17:50.508243397Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 30 13:17:50.508269 containerd[1588]: time="2025-10-30T13:17:50.508253246Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 30 13:17:50.508338 containerd[1588]: time="2025-10-30T13:17:50.508273877Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 30 13:17:50.508338 containerd[1588]: time="2025-10-30T13:17:50.508298249Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 30 13:17:50.508481 containerd[1588]: time="2025-10-30T13:17:50.508448241Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 30 13:17:50.508481 containerd[1588]: time="2025-10-30T13:17:50.508474489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 30 13:17:50.508538 containerd[1588]: time="2025-10-30T13:17:50.508488940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 30 13:17:50.508538 containerd[1588]: time="2025-10-30T13:17:50.508506425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 30 13:17:50.508538 containerd[1588]: time="2025-10-30T13:17:50.508524349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 30 13:17:50.508538 containerd[1588]: time="2025-10-30T13:17:50.508536730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 30 13:17:50.508607 containerd[1588]: time="2025-10-30T13:17:50.508548342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 30 13:17:50.508607 containerd[1588]: time="2025-10-30T13:17:50.508559001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 30 13:17:50.508607 containerd[1588]: time="2025-10-30T13:17:50.508570459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 30 13:17:50.508607 containerd[1588]: time="2025-10-30T13:17:50.508581066Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 30 13:17:50.508607 containerd[1588]: time="2025-10-30T13:17:50.508591644Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 30 13:17:50.508708 containerd[1588]: time="2025-10-30T13:17:50.508657462Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 30 13:17:50.508708 containerd[1588]: time="2025-10-30T13:17:50.508672989Z" level=info msg="Start snapshots syncer"
Oct 30 13:17:50.508780 containerd[1588]: time="2025-10-30T13:17:50.508743552Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 30 13:17:50.509049 containerd[1588]: time="2025-10-30T13:17:50.508976981Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 30 13:17:50.509233 containerd[1588]: time="2025-10-30T13:17:50.509057219Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 30 13:17:50.509233 containerd[1588]: time="2025-10-30T13:17:50.509136269Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 30 13:17:50.509288 containerd[1588]: time="2025-10-30T13:17:50.509245594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 30 13:17:50.509288 containerd[1588]: time="2025-10-30T13:17:50.509264760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 30 13:17:50.509288 containerd[1588]: time="2025-10-30T13:17:50.509276361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 30 13:17:50.509288 containerd[1588]: time="2025-10-30T13:17:50.509285851Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 30 13:17:50.509365 containerd[1588]: time="2025-10-30T13:17:50.509297597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 30 13:17:50.509365 containerd[1588]: time="2025-10-30T13:17:50.509307785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 30 13:17:50.509365 containerd[1588]: time="2025-10-30T13:17:50.509318996Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 30 13:17:50.509365 containerd[1588]: time="2025-10-30T13:17:50.509339362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 30 13:17:50.509365 containerd[1588]: time="2025-10-30T13:17:50.509349826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 30 13:17:50.509457 containerd[1588]: time="2025-10-30T13:17:50.509372229Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 30 13:17:50.509457 containerd[1588]: time="2025-10-30T13:17:50.509399235Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 30 13:17:50.509457 containerd[1588]: time="2025-10-30T13:17:50.509411154Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 30 13:17:50.509457 containerd[1588]: time="2025-10-30T13:17:50.509419210Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 30 13:17:50.509457 containerd[1588]: time="2025-10-30T13:17:50.509428065Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 30 13:17:50.509457 containerd[1588]: time="2025-10-30T13:17:50.509436285Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 30 13:17:50.509457 containerd[1588]: time="2025-10-30T13:17:50.509445560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 30 13:17:50.509457 containerd[1588]: time="2025-10-30T13:17:50.509455122Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 30 13:17:50.509612 containerd[1588]: time="2025-10-30T13:17:50.509473530Z" level=info msg="runtime interface created"
Oct 30 13:17:50.509612 containerd[1588]: time="2025-10-30T13:17:50.509479095Z" level=info msg="created NRI interface"
Oct 30 13:17:50.509612 containerd[1588]: time="2025-10-30T13:17:50.509498619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 30 13:17:50.509612 containerd[1588]: time="2025-10-30T13:17:50.509510989Z" level=info msg="Connect containerd service"
Oct 30 13:17:50.509612 containerd[1588]: time="2025-10-30T13:17:50.509561085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 30 13:17:50.510420 containerd[1588]: time="2025-10-30T13:17:50.510386602Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 30 13:17:50.564363 tar[1587]: linux-amd64/README.md
Oct 30 13:17:50.588946 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 30 13:17:50.636626 containerd[1588]: time="2025-10-30T13:17:50.636558044Z" level=info msg="Start subscribing containerd event"
Oct 30 13:17:50.636754 containerd[1588]: time="2025-10-30T13:17:50.636638457Z" level=info msg="Start recovering state"
Oct 30 13:17:50.636803 containerd[1588]: time="2025-10-30T13:17:50.636781193Z" level=info msg="Start event monitor"
Oct 30 13:17:50.636831 containerd[1588]: time="2025-10-30T13:17:50.636811980Z" level=info msg="Start cni network conf syncer for default"
Oct 30 13:17:50.636831 containerd[1588]: time="2025-10-30T13:17:50.636821307Z" level=info msg="Start streaming server"
Oct 30 13:17:50.636895 containerd[1588]: time="2025-10-30T13:17:50.636833544Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 30 13:17:50.636895 containerd[1588]: time="2025-10-30T13:17:50.636841958Z" level=info msg="runtime interface starting up..."
Oct 30 13:17:50.636895 containerd[1588]: time="2025-10-30T13:17:50.636856030Z" level=info msg="starting plugins..."
Oct 30 13:17:50.636895 containerd[1588]: time="2025-10-30T13:17:50.636873423Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 30 13:17:50.637249 containerd[1588]: time="2025-10-30T13:17:50.637209647Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 30 13:17:50.637307 containerd[1588]: time="2025-10-30T13:17:50.637289701Z" level=info msg=serving...
address=/run/containerd/containerd.sock Oct 30 13:17:50.638347 containerd[1588]: time="2025-10-30T13:17:50.637366199Z" level=info msg="containerd successfully booted in 0.150844s" Oct 30 13:17:50.637577 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 13:17:51.459647 systemd-networkd[1508]: eth0: Gained IPv6LL Oct 30 13:17:51.463098 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 13:17:51.465957 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 13:17:51.469417 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 30 13:17:51.472634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:17:51.493700 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 13:17:51.513568 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 30 13:17:51.513921 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 30 13:17:51.516404 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 30 13:17:51.523086 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 13:17:52.751818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:17:52.754389 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 13:17:52.756631 systemd[1]: Startup finished in 2.409s (kernel) + 6.336s (initrd) + 5.678s (userspace) = 14.424s. Oct 30 13:17:52.783544 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 13:17:53.306849 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 13:17:53.308341 systemd[1]: Started sshd@0-10.0.0.37:22-10.0.0.1:49952.service - OpenSSH per-connection server daemon (10.0.0.1:49952). 
Oct 30 13:17:54.879069 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1469035751 wd_nsec: 1469035306
Oct 30 13:17:55.673790 kubelet[1703]: E1030 13:17:55.673693 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 13:17:55.678133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 13:17:55.678440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 13:17:55.679021 systemd[1]: kubelet.service: Consumed 3.887s CPU time, 267.7M memory peak.
Oct 30 13:17:55.852244 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 49952 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:17:55.854503 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:17:55.862695 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 30 13:17:55.863977 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 30 13:17:55.871024 systemd-logind[1576]: New session 1 of user core.
Oct 30 13:17:55.888495 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 30 13:17:55.891760 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 30 13:17:55.907507 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 30 13:17:55.910168 systemd-logind[1576]: New session c1 of user core.
Oct 30 13:17:56.063964 systemd[1721]: Queued start job for default target default.target.
Oct 30 13:17:56.079472 systemd[1721]: Created slice app.slice - User Application Slice.
Oct 30 13:17:56.079504 systemd[1721]: Reached target paths.target - Paths.
Oct 30 13:17:56.079565 systemd[1721]: Reached target timers.target - Timers.
Oct 30 13:17:56.081293 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 30 13:17:56.092950 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 30 13:17:56.093152 systemd[1721]: Reached target sockets.target - Sockets.
Oct 30 13:17:56.093209 systemd[1721]: Reached target basic.target - Basic System.
Oct 30 13:17:56.093277 systemd[1721]: Reached target default.target - Main User Target.
Oct 30 13:17:56.093322 systemd[1721]: Startup finished in 175ms.
Oct 30 13:17:56.093589 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 30 13:17:56.095295 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 30 13:17:56.107494 systemd[1]: Started sshd@1-10.0.0.37:22-10.0.0.1:39354.service - OpenSSH per-connection server daemon (10.0.0.1:39354).
Oct 30 13:17:56.164009 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 39354 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:17:56.165613 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:17:56.170580 systemd-logind[1576]: New session 2 of user core.
Oct 30 13:17:56.180129 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 30 13:17:56.193580 sshd[1735]: Connection closed by 10.0.0.1 port 39354
Oct 30 13:17:56.193879 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Oct 30 13:17:56.206889 systemd[1]: sshd@1-10.0.0.37:22-10.0.0.1:39354.service: Deactivated successfully.
Oct 30 13:17:56.208819 systemd[1]: session-2.scope: Deactivated successfully.
Oct 30 13:17:56.209633 systemd-logind[1576]: Session 2 logged out. Waiting for processes to exit.
Oct 30 13:17:56.212931 systemd[1]: Started sshd@2-10.0.0.37:22-10.0.0.1:39368.service - OpenSSH per-connection server daemon (10.0.0.1:39368).
Oct 30 13:17:56.213608 systemd-logind[1576]: Removed session 2.
Oct 30 13:17:56.274314 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 39368 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:17:56.275611 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:17:56.279835 systemd-logind[1576]: New session 3 of user core.
Oct 30 13:17:56.290123 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 30 13:17:56.298680 sshd[1744]: Connection closed by 10.0.0.1 port 39368
Oct 30 13:17:56.299038 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Oct 30 13:17:56.311497 systemd[1]: sshd@2-10.0.0.37:22-10.0.0.1:39368.service: Deactivated successfully.
Oct 30 13:17:56.313205 systemd[1]: session-3.scope: Deactivated successfully.
Oct 30 13:17:56.313896 systemd-logind[1576]: Session 3 logged out. Waiting for processes to exit.
Oct 30 13:17:56.316441 systemd[1]: Started sshd@3-10.0.0.37:22-10.0.0.1:39380.service - OpenSSH per-connection server daemon (10.0.0.1:39380).
Oct 30 13:17:56.317054 systemd-logind[1576]: Removed session 3.
Oct 30 13:17:56.384039 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 39380 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:17:56.385475 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:17:56.389898 systemd-logind[1576]: New session 4 of user core.
Oct 30 13:17:56.403152 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 30 13:17:56.416409 sshd[1753]: Connection closed by 10.0.0.1 port 39380
Oct 30 13:17:56.416789 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Oct 30 13:17:56.425430 systemd[1]: sshd@3-10.0.0.37:22-10.0.0.1:39380.service: Deactivated successfully.
Oct 30 13:17:56.427159 systemd[1]: session-4.scope: Deactivated successfully.
Oct 30 13:17:56.427963 systemd-logind[1576]: Session 4 logged out. Waiting for processes to exit.
Oct 30 13:17:56.430522 systemd[1]: Started sshd@4-10.0.0.37:22-10.0.0.1:39386.service - OpenSSH per-connection server daemon (10.0.0.1:39386).
Oct 30 13:17:56.431058 systemd-logind[1576]: Removed session 4.
Oct 30 13:17:56.502259 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 39386 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:17:56.503510 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:17:56.507459 systemd-logind[1576]: New session 5 of user core.
Oct 30 13:17:56.517109 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 30 13:17:56.537793 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 30 13:17:56.538157 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 13:17:56.555473 sudo[1763]: pam_unix(sudo:session): session closed for user root
Oct 30 13:17:56.557170 sshd[1762]: Connection closed by 10.0.0.1 port 39386
Oct 30 13:17:56.557591 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Oct 30 13:17:56.574407 systemd[1]: sshd@4-10.0.0.37:22-10.0.0.1:39386.service: Deactivated successfully.
Oct 30 13:17:56.576506 systemd[1]: session-5.scope: Deactivated successfully.
Oct 30 13:17:56.577393 systemd-logind[1576]: Session 5 logged out. Waiting for processes to exit.
Oct 30 13:17:56.580274 systemd[1]: Started sshd@5-10.0.0.37:22-10.0.0.1:39390.service - OpenSSH per-connection server daemon (10.0.0.1:39390).
Oct 30 13:17:56.580890 systemd-logind[1576]: Removed session 5.
Oct 30 13:17:56.643306 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 39390 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:17:56.645030 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:17:56.649918 systemd-logind[1576]: New session 6 of user core.
Oct 30 13:17:56.659133 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 30 13:17:56.673502 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 30 13:17:56.673815 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 13:17:56.680426 sudo[1775]: pam_unix(sudo:session): session closed for user root
Oct 30 13:17:56.687842 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 30 13:17:56.688172 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 13:17:56.698011 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 13:17:56.754028 augenrules[1797]: No rules
Oct 30 13:17:56.755715 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 13:17:56.756090 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 13:17:56.757665 sudo[1774]: pam_unix(sudo:session): session closed for user root
Oct 30 13:17:56.759900 sshd[1773]: Connection closed by 10.0.0.1 port 39390
Oct 30 13:17:56.760205 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Oct 30 13:17:56.776240 systemd[1]: sshd@5-10.0.0.37:22-10.0.0.1:39390.service: Deactivated successfully.
Oct 30 13:17:56.778428 systemd[1]: session-6.scope: Deactivated successfully.
Oct 30 13:17:56.779293 systemd-logind[1576]: Session 6 logged out. Waiting for processes to exit.
Oct 30 13:17:56.782367 systemd[1]: Started sshd@6-10.0.0.37:22-10.0.0.1:39396.service - OpenSSH per-connection server daemon (10.0.0.1:39396).
Oct 30 13:17:56.783291 systemd-logind[1576]: Removed session 6.
Oct 30 13:17:56.845418 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 39396 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo
Oct 30 13:17:56.847383 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:17:56.852181 systemd-logind[1576]: New session 7 of user core.
Oct 30 13:17:56.862367 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 30 13:17:56.877428 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 30 13:17:56.877785 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 13:17:58.625088 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 30 13:17:58.646582 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 30 13:17:59.308324 dockerd[1831]: time="2025-10-30T13:17:59.308230142Z" level=info msg="Starting up"
Oct 30 13:17:59.309370 dockerd[1831]: time="2025-10-30T13:17:59.309331757Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 30 13:17:59.346457 dockerd[1831]: time="2025-10-30T13:17:59.346397989Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 30 13:17:59.643724 dockerd[1831]: time="2025-10-30T13:17:59.643548924Z" level=info msg="Loading containers: start."
Oct 30 13:17:59.657019 kernel: Initializing XFRM netlink socket
Oct 30 13:17:59.961230 systemd-networkd[1508]: docker0: Link UP
Oct 30 13:17:59.969414 dockerd[1831]: time="2025-10-30T13:17:59.969347126Z" level=info msg="Loading containers: done."
Oct 30 13:17:59.993776 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3535576703-merged.mount: Deactivated successfully.
Oct 30 13:17:59.995235 dockerd[1831]: time="2025-10-30T13:17:59.995171869Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 30 13:17:59.995328 dockerd[1831]: time="2025-10-30T13:17:59.995292047Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 30 13:17:59.995433 dockerd[1831]: time="2025-10-30T13:17:59.995408806Z" level=info msg="Initializing buildkit"
Oct 30 13:18:00.027141 dockerd[1831]: time="2025-10-30T13:18:00.027106131Z" level=info msg="Completed buildkit initialization"
Oct 30 13:18:00.034099 dockerd[1831]: time="2025-10-30T13:18:00.034059271Z" level=info msg="Daemon has completed initialization"
Oct 30 13:18:00.034209 dockerd[1831]: time="2025-10-30T13:18:00.034142909Z" level=info msg="API listen on /run/docker.sock"
Oct 30 13:18:00.034392 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 30 13:18:01.019032 containerd[1588]: time="2025-10-30T13:18:01.018941651Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Oct 30 13:18:01.622228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4203977251.mount: Deactivated successfully.
Oct 30 13:18:02.777249 containerd[1588]: time="2025-10-30T13:18:02.777166414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:02.778035 containerd[1588]: time="2025-10-30T13:18:02.777988172Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Oct 30 13:18:02.779355 containerd[1588]: time="2025-10-30T13:18:02.779297636Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:02.781681 containerd[1588]: time="2025-10-30T13:18:02.781648912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:02.784004 containerd[1588]: time="2025-10-30T13:18:02.783794295Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.764766546s"
Oct 30 13:18:02.784004 containerd[1588]: time="2025-10-30T13:18:02.783837183Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Oct 30 13:18:02.785367 containerd[1588]: time="2025-10-30T13:18:02.785321719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Oct 30 13:18:04.533677 containerd[1588]: time="2025-10-30T13:18:04.533592998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:04.534266 containerd[1588]: time="2025-10-30T13:18:04.534217268Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 30 13:18:04.535382 containerd[1588]: time="2025-10-30T13:18:04.535331873Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:04.537849 containerd[1588]: time="2025-10-30T13:18:04.537798790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:04.538826 containerd[1588]: time="2025-10-30T13:18:04.538775270Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.753132029s"
Oct 30 13:18:04.538826 containerd[1588]: time="2025-10-30T13:18:04.538823350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 30 13:18:04.539603 containerd[1588]: time="2025-10-30T13:18:04.539553260Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 30 13:18:05.761767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 30 13:18:05.764334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 13:18:06.306632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 13:18:06.320357 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 13:18:06.379006 kubelet[2126]: E1030 13:18:06.378916 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 13:18:06.386000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 13:18:06.386236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 13:18:06.386693 systemd[1]: kubelet.service: Consumed 324ms CPU time, 110.4M memory peak.
Oct 30 13:18:08.452391 containerd[1588]: time="2025-10-30T13:18:08.452273394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:08.453129 containerd[1588]: time="2025-10-30T13:18:08.453102182Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Oct 30 13:18:08.454378 containerd[1588]: time="2025-10-30T13:18:08.454342422Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:08.457124 containerd[1588]: time="2025-10-30T13:18:08.457070128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:08.458066 containerd[1588]: time="2025-10-30T13:18:08.458029361Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 3.918447972s"
Oct 30 13:18:08.458116 containerd[1588]: time="2025-10-30T13:18:08.458077300Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 30 13:18:08.459431 containerd[1588]: time="2025-10-30T13:18:08.459387054Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 30 13:18:10.066252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676287954.mount: Deactivated successfully.
Oct 30 13:18:10.482828 containerd[1588]: time="2025-10-30T13:18:10.482664905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:10.483444 containerd[1588]: time="2025-10-30T13:18:10.483379316Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Oct 30 13:18:10.484460 containerd[1588]: time="2025-10-30T13:18:10.484427071Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:10.486418 containerd[1588]: time="2025-10-30T13:18:10.486350063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:10.486998 containerd[1588]: time="2025-10-30T13:18:10.486926979Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.02750479s"
Oct 30 13:18:10.487062 containerd[1588]: time="2025-10-30T13:18:10.487000875Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 30 13:18:10.487576 containerd[1588]: time="2025-10-30T13:18:10.487551479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 30 13:18:11.174054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665834614.mount: Deactivated successfully.
Oct 30 13:18:12.314507 containerd[1588]: time="2025-10-30T13:18:12.314427952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:12.315188 containerd[1588]: time="2025-10-30T13:18:12.315100199Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Oct 30 13:18:12.316421 containerd[1588]: time="2025-10-30T13:18:12.316381667Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:12.318944 containerd[1588]: time="2025-10-30T13:18:12.318907859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:12.320091 containerd[1588]: time="2025-10-30T13:18:12.320043105Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.832461976s"
Oct 30 13:18:12.320140 containerd[1588]: time="2025-10-30T13:18:12.320092016Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Oct 30 13:18:12.320796 containerd[1588]: time="2025-10-30T13:18:12.320744180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 30 13:18:12.800104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152503005.mount: Deactivated successfully.
Oct 30 13:18:12.807039 containerd[1588]: time="2025-10-30T13:18:12.806966291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 13:18:12.807758 containerd[1588]: time="2025-10-30T13:18:12.807693145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 30 13:18:12.809066 containerd[1588]: time="2025-10-30T13:18:12.809029694Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 13:18:12.811025 containerd[1588]: time="2025-10-30T13:18:12.810957878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 13:18:12.811555 containerd[1588]: time="2025-10-30T13:18:12.811507536Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 490.727285ms"
Oct 30 13:18:12.811555 containerd[1588]: time="2025-10-30T13:18:12.811549998Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 30 13:18:12.812157 containerd[1588]: time="2025-10-30T13:18:12.812049410Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Oct 30 13:18:13.404244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510969074.mount: Deactivated successfully.
Oct 30 13:18:15.679397 containerd[1588]: time="2025-10-30T13:18:15.679320414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:15.680235 containerd[1588]: time="2025-10-30T13:18:15.680146295Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Oct 30 13:18:15.681412 containerd[1588]: time="2025-10-30T13:18:15.681377795Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:15.684102 containerd[1588]: time="2025-10-30T13:18:15.684068228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 13:18:15.685141 containerd[1588]: time="2025-10-30T13:18:15.685110781Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.873028494s"
Oct 30 13:18:15.685178 containerd[1588]: time="2025-10-30T13:18:15.685145123Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Oct 30 13:18:16.511706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 30 13:18:16.513473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 13:18:16.746122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 13:18:16.767249 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 13:18:16.814312 kubelet[2285]: E1030 13:18:16.814247 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 13:18:16.818693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 13:18:16.818938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 13:18:16.819398 systemd[1]: kubelet.service: Consumed 240ms CPU time, 111M memory peak.
Oct 30 13:18:18.786215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 13:18:18.786448 systemd[1]: kubelet.service: Consumed 240ms CPU time, 111M memory peak.
Oct 30 13:18:18.789379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 13:18:18.818777 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-7.scope)...
Oct 30 13:18:18.818802 systemd[1]: Reloading...
Oct 30 13:18:18.892046 zram_generator::config[2343]: No configuration found.
Oct 30 13:18:19.464349 systemd[1]: Reloading finished in 645 ms.
Oct 30 13:18:19.531592 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 13:18:19.531690 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 13:18:19.532026 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:18:19.532069 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.3M memory peak. Oct 30 13:18:19.533645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:18:19.732345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:18:19.736501 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 13:18:19.781486 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 13:18:19.781486 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 13:18:19.781486 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 30 13:18:19.781903 kubelet[2391]: I1030 13:18:19.781528 2391 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 13:18:20.321770 kubelet[2391]: I1030 13:18:20.321705 2391 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 30 13:18:20.321770 kubelet[2391]: I1030 13:18:20.321736 2391 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 13:18:20.322090 kubelet[2391]: I1030 13:18:20.322060 2391 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 13:18:20.354105 kubelet[2391]: I1030 13:18:20.354028 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 13:18:20.357783 kubelet[2391]: E1030 13:18:20.357737 2391 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 30 13:18:20.364875 kubelet[2391]: I1030 13:18:20.364855 2391 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 13:18:20.373560 kubelet[2391]: I1030 13:18:20.373494 2391 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 13:18:20.373908 kubelet[2391]: I1030 13:18:20.373854 2391 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 13:18:20.374099 kubelet[2391]: I1030 13:18:20.373880 2391 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 13:18:20.374099 kubelet[2391]: I1030 13:18:20.374095 2391 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 13:18:20.374410 
kubelet[2391]: I1030 13:18:20.374110 2391 container_manager_linux.go:303] "Creating device plugin manager" Oct 30 13:18:20.374410 kubelet[2391]: I1030 13:18:20.374307 2391 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:18:20.377298 kubelet[2391]: I1030 13:18:20.377247 2391 kubelet.go:480] "Attempting to sync node with API server" Oct 30 13:18:20.377298 kubelet[2391]: I1030 13:18:20.377271 2391 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 13:18:20.377552 kubelet[2391]: I1030 13:18:20.377330 2391 kubelet.go:386] "Adding apiserver pod source" Oct 30 13:18:20.377552 kubelet[2391]: I1030 13:18:20.377349 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 13:18:20.390639 kubelet[2391]: E1030 13:18:20.390297 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 13:18:20.390850 kubelet[2391]: I1030 13:18:20.390706 2391 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 13:18:20.391474 kubelet[2391]: I1030 13:18:20.391434 2391 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 13:18:20.393000 kubelet[2391]: W1030 13:18:20.392960 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 30 13:18:20.394302 kubelet[2391]: E1030 13:18:20.394238 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 13:18:20.397514 kubelet[2391]: I1030 13:18:20.397488 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 13:18:20.397590 kubelet[2391]: I1030 13:18:20.397551 2391 server.go:1289] "Started kubelet" Oct 30 13:18:20.399972 kubelet[2391]: I1030 13:18:20.399529 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 13:18:20.401912 kubelet[2391]: I1030 13:18:20.401803 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 13:18:20.403636 kubelet[2391]: I1030 13:18:20.403570 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 13:18:20.403636 kubelet[2391]: I1030 13:18:20.403632 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 13:18:20.404436 kubelet[2391]: I1030 13:18:20.404410 2391 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 13:18:20.404672 kubelet[2391]: I1030 13:18:20.404642 2391 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 13:18:20.405301 kubelet[2391]: I1030 13:18:20.405263 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 13:18:20.405383 kubelet[2391]: I1030 13:18:20.405338 2391 reconciler.go:26] "Reconciler: start to sync state" Oct 30 13:18:20.405751 kubelet[2391]: I1030 13:18:20.405719 2391 server.go:317] "Adding debug handlers to kubelet server" Oct 30 13:18:20.405907 kubelet[2391]: E1030 
13:18:20.405870 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 13:18:20.405950 kubelet[2391]: E1030 13:18:20.403197 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.37:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.37:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18734752d9b697ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 13:18:20.39750859 +0000 UTC m=+0.656657445,LastTimestamp:2025-10-30 13:18:20.39750859 +0000 UTC m=+0.656657445,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 13:18:20.407106 kubelet[2391]: E1030 13:18:20.406361 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="200ms" Oct 30 13:18:20.407106 kubelet[2391]: E1030 13:18:20.406408 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:18:20.407106 kubelet[2391]: I1030 13:18:20.406799 2391 factory.go:223] Registration of the systemd container factory successfully Oct 30 13:18:20.407106 kubelet[2391]: I1030 13:18:20.406874 2391 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 13:18:20.407435 kubelet[2391]: E1030 13:18:20.407407 2391 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 13:18:20.407592 kubelet[2391]: I1030 13:18:20.407563 2391 factory.go:223] Registration of the containerd container factory successfully Oct 30 13:18:20.473243 kubelet[2391]: I1030 13:18:20.473166 2391 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 30 13:18:20.475004 kubelet[2391]: I1030 13:18:20.474890 2391 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 30 13:18:20.475004 kubelet[2391]: I1030 13:18:20.474930 2391 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 30 13:18:20.475004 kubelet[2391]: I1030 13:18:20.474964 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 30 13:18:20.475004 kubelet[2391]: I1030 13:18:20.474974 2391 kubelet.go:2436] "Starting kubelet main sync loop" Oct 30 13:18:20.475138 kubelet[2391]: E1030 13:18:20.475055 2391 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 13:18:20.477038 kubelet[2391]: E1030 13:18:20.476997 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 13:18:20.477624 kubelet[2391]: I1030 13:18:20.477589 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 13:18:20.477624 kubelet[2391]: I1030 13:18:20.477615 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 13:18:20.477689 kubelet[2391]: I1030 13:18:20.477637 2391 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:18:20.506939 kubelet[2391]: E1030 13:18:20.506892 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:18:20.576336 kubelet[2391]: E1030 13:18:20.576144 2391 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 13:18:20.607508 kubelet[2391]: E1030 13:18:20.607455 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:18:20.607871 kubelet[2391]: E1030 13:18:20.607825 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="400ms" Oct 30 13:18:20.625764 kubelet[2391]: I1030 
13:18:20.625717 2391 policy_none.go:49] "None policy: Start" Oct 30 13:18:20.625764 kubelet[2391]: I1030 13:18:20.625756 2391 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 13:18:20.625835 kubelet[2391]: I1030 13:18:20.625780 2391 state_mem.go:35] "Initializing new in-memory state store" Oct 30 13:18:20.633558 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 13:18:20.647269 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 13:18:20.651795 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 30 13:18:20.665254 kubelet[2391]: E1030 13:18:20.665215 2391 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 13:18:20.665673 kubelet[2391]: I1030 13:18:20.665632 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 13:18:20.665723 kubelet[2391]: I1030 13:18:20.665660 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 13:18:20.665920 kubelet[2391]: I1030 13:18:20.665891 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 13:18:20.666825 kubelet[2391]: E1030 13:18:20.666803 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 30 13:18:20.666882 kubelet[2391]: E1030 13:18:20.666842 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 30 13:18:20.767310 kubelet[2391]: I1030 13:18:20.767266 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:18:20.767602 kubelet[2391]: E1030 13:18:20.767580 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Oct 30 13:18:20.786961 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Oct 30 13:18:20.807481 kubelet[2391]: I1030 13:18:20.807451 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 30 13:18:20.807481 kubelet[2391]: I1030 13:18:20.807485 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3156c81665388dbde2419c0233b1879b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3156c81665388dbde2419c0233b1879b\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:18:20.807934 kubelet[2391]: I1030 13:18:20.807505 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 30 13:18:20.807934 kubelet[2391]: I1030 13:18:20.807522 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:18:20.807934 kubelet[2391]: I1030 13:18:20.807537 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:18:20.807934 kubelet[2391]: E1030 13:18:20.807547 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:18:20.807934 kubelet[2391]: I1030 13:18:20.807550 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3156c81665388dbde2419c0233b1879b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3156c81665388dbde2419c0233b1879b\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:18:20.807934 kubelet[2391]: I1030 13:18:20.807608 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3156c81665388dbde2419c0233b1879b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3156c81665388dbde2419c0233b1879b\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:18:20.808089 kubelet[2391]: I1030 13:18:20.807632 2391 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:18:20.808089 kubelet[2391]: I1030 13:18:20.807649 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:18:20.810830 systemd[1]: Created slice kubepods-burstable-pod3156c81665388dbde2419c0233b1879b.slice - libcontainer container kubepods-burstable-pod3156c81665388dbde2419c0233b1879b.slice. Oct 30 13:18:20.812942 kubelet[2391]: E1030 13:18:20.812885 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:18:20.826506 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. 
Oct 30 13:18:20.828850 kubelet[2391]: E1030 13:18:20.828808 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:18:20.968854 kubelet[2391]: I1030 13:18:20.968809 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:18:20.969175 kubelet[2391]: E1030 13:18:20.969148 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Oct 30 13:18:21.009429 kubelet[2391]: E1030 13:18:21.009358 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="800ms" Oct 30 13:18:21.108722 kubelet[2391]: E1030 13:18:21.108563 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:21.109432 containerd[1588]: time="2025-10-30T13:18:21.109379806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 30 13:18:21.113594 kubelet[2391]: E1030 13:18:21.113564 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:21.114079 containerd[1588]: time="2025-10-30T13:18:21.114020865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3156c81665388dbde2419c0233b1879b,Namespace:kube-system,Attempt:0,}" Oct 30 13:18:21.129406 kubelet[2391]: E1030 13:18:21.129375 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:21.129898 containerd[1588]: time="2025-10-30T13:18:21.129856954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 30 13:18:21.145582 containerd[1588]: time="2025-10-30T13:18:21.145519845Z" level=info msg="connecting to shim 482f94688b6554b87945d2bd64d330df71e5577521061362762e15396b533a77" address="unix:///run/containerd/s/2babd3c754a2adceec0cdf6db73f80b07eb790905c6dbd33f884c7c519304888" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:18:21.158168 containerd[1588]: time="2025-10-30T13:18:21.157741848Z" level=info msg="connecting to shim c7c57522800770d7c89aa37681d57af6ab30e1fc60288276258e630da1ce7c40" address="unix:///run/containerd/s/7b6075a7f72469ca071bd745354398f05cbb21bbba057f061f9834464a60752b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:18:21.162272 containerd[1588]: time="2025-10-30T13:18:21.159841563Z" level=info msg="connecting to shim 40b91a99d947a0b9dabdd50ae0bc00a3699e0d4c008d17128732c78d861ecbf9" address="unix:///run/containerd/s/9d1d1d8f22ae2a1a05c61ed791b8eb3a8a19098d72807e6309ca99965587702b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:18:21.271264 systemd[1]: Started cri-containerd-482f94688b6554b87945d2bd64d330df71e5577521061362762e15396b533a77.scope - libcontainer container 482f94688b6554b87945d2bd64d330df71e5577521061362762e15396b533a77. Oct 30 13:18:21.276336 systemd[1]: Started cri-containerd-40b91a99d947a0b9dabdd50ae0bc00a3699e0d4c008d17128732c78d861ecbf9.scope - libcontainer container 40b91a99d947a0b9dabdd50ae0bc00a3699e0d4c008d17128732c78d861ecbf9. Oct 30 13:18:21.278524 systemd[1]: Started cri-containerd-c7c57522800770d7c89aa37681d57af6ab30e1fc60288276258e630da1ce7c40.scope - libcontainer container c7c57522800770d7c89aa37681d57af6ab30e1fc60288276258e630da1ce7c40. 
Oct 30 13:18:21.324046 kubelet[2391]: E1030 13:18:21.320697 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 13:18:21.339468 containerd[1588]: time="2025-10-30T13:18:21.339390437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"482f94688b6554b87945d2bd64d330df71e5577521061362762e15396b533a77\"" Oct 30 13:18:21.340899 containerd[1588]: time="2025-10-30T13:18:21.340856922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3156c81665388dbde2419c0233b1879b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7c57522800770d7c89aa37681d57af6ab30e1fc60288276258e630da1ce7c40\"" Oct 30 13:18:21.341511 kubelet[2391]: E1030 13:18:21.341477 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:21.342506 kubelet[2391]: E1030 13:18:21.342485 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:21.347656 containerd[1588]: time="2025-10-30T13:18:21.347022986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"40b91a99d947a0b9dabdd50ae0bc00a3699e0d4c008d17128732c78d861ecbf9\"" Oct 30 13:18:21.348850 kubelet[2391]: E1030 13:18:21.348796 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:21.353270 containerd[1588]: time="2025-10-30T13:18:21.351115245Z" level=info msg="CreateContainer within sandbox \"482f94688b6554b87945d2bd64d330df71e5577521061362762e15396b533a77\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 13:18:21.354553 containerd[1588]: time="2025-10-30T13:18:21.353854111Z" level=info msg="CreateContainer within sandbox \"c7c57522800770d7c89aa37681d57af6ab30e1fc60288276258e630da1ce7c40\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 13:18:21.355975 containerd[1588]: time="2025-10-30T13:18:21.355943933Z" level=info msg="CreateContainer within sandbox \"40b91a99d947a0b9dabdd50ae0bc00a3699e0d4c008d17128732c78d861ecbf9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 13:18:21.366298 containerd[1588]: time="2025-10-30T13:18:21.366193727Z" level=info msg="Container 43bd5c2c135845be63b9a456ab5a8dc964ae3f98a0762be08908c0d3e6a09a43: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:18:21.369686 containerd[1588]: time="2025-10-30T13:18:21.369663020Z" level=info msg="Container b86486685908e16e1ccaa842228a3c609b0805be8b57ddf05d4174cd18ced8b1: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:18:21.370474 kubelet[2391]: I1030 13:18:21.370438 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:18:21.370813 kubelet[2391]: E1030 13:18:21.370772 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Oct 30 13:18:21.374759 containerd[1588]: time="2025-10-30T13:18:21.374715770Z" level=info msg="Container 45b9d133c015e6f812a00d9b14036efc335fd4c3451a34ce8a0cbdc00e867566: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:18:21.381026 containerd[1588]: time="2025-10-30T13:18:21.380976196Z" level=info 
msg="CreateContainer within sandbox \"482f94688b6554b87945d2bd64d330df71e5577521061362762e15396b533a77\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43bd5c2c135845be63b9a456ab5a8dc964ae3f98a0762be08908c0d3e6a09a43\""
Oct 30 13:18:21.381507 containerd[1588]: time="2025-10-30T13:18:21.381473621Z" level=info msg="StartContainer for \"43bd5c2c135845be63b9a456ab5a8dc964ae3f98a0762be08908c0d3e6a09a43\""
Oct 30 13:18:21.382774 containerd[1588]: time="2025-10-30T13:18:21.382731489Z" level=info msg="connecting to shim 43bd5c2c135845be63b9a456ab5a8dc964ae3f98a0762be08908c0d3e6a09a43" address="unix:///run/containerd/s/2babd3c754a2adceec0cdf6db73f80b07eb790905c6dbd33f884c7c519304888" protocol=ttrpc version=3
Oct 30 13:18:21.383077 containerd[1588]: time="2025-10-30T13:18:21.383043458Z" level=info msg="CreateContainer within sandbox \"40b91a99d947a0b9dabdd50ae0bc00a3699e0d4c008d17128732c78d861ecbf9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"45b9d133c015e6f812a00d9b14036efc335fd4c3451a34ce8a0cbdc00e867566\""
Oct 30 13:18:21.384824 containerd[1588]: time="2025-10-30T13:18:21.384799211Z" level=info msg="CreateContainer within sandbox \"c7c57522800770d7c89aa37681d57af6ab30e1fc60288276258e630da1ce7c40\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b86486685908e16e1ccaa842228a3c609b0805be8b57ddf05d4174cd18ced8b1\""
Oct 30 13:18:21.384932 containerd[1588]: time="2025-10-30T13:18:21.384915671Z" level=info msg="StartContainer for \"45b9d133c015e6f812a00d9b14036efc335fd4c3451a34ce8a0cbdc00e867566\""
Oct 30 13:18:21.385869 containerd[1588]: time="2025-10-30T13:18:21.385847469Z" level=info msg="connecting to shim 45b9d133c015e6f812a00d9b14036efc335fd4c3451a34ce8a0cbdc00e867566" address="unix:///run/containerd/s/9d1d1d8f22ae2a1a05c61ed791b8eb3a8a19098d72807e6309ca99965587702b" protocol=ttrpc version=3
Oct 30 13:18:21.386623 containerd[1588]: time="2025-10-30T13:18:21.386595716Z" level=info msg="StartContainer for \"b86486685908e16e1ccaa842228a3c609b0805be8b57ddf05d4174cd18ced8b1\""
Oct 30 13:18:21.387869 containerd[1588]: time="2025-10-30T13:18:21.387840965Z" level=info msg="connecting to shim b86486685908e16e1ccaa842228a3c609b0805be8b57ddf05d4174cd18ced8b1" address="unix:///run/containerd/s/7b6075a7f72469ca071bd745354398f05cbb21bbba057f061f9834464a60752b" protocol=ttrpc version=3
Oct 30 13:18:21.412211 systemd[1]: Started cri-containerd-43bd5c2c135845be63b9a456ab5a8dc964ae3f98a0762be08908c0d3e6a09a43.scope - libcontainer container 43bd5c2c135845be63b9a456ab5a8dc964ae3f98a0762be08908c0d3e6a09a43.
Oct 30 13:18:21.416935 systemd[1]: Started cri-containerd-45b9d133c015e6f812a00d9b14036efc335fd4c3451a34ce8a0cbdc00e867566.scope - libcontainer container 45b9d133c015e6f812a00d9b14036efc335fd4c3451a34ce8a0cbdc00e867566.
Oct 30 13:18:21.418738 systemd[1]: Started cri-containerd-b86486685908e16e1ccaa842228a3c609b0805be8b57ddf05d4174cd18ced8b1.scope - libcontainer container b86486685908e16e1ccaa842228a3c609b0805be8b57ddf05d4174cd18ced8b1.
Oct 30 13:18:21.435025 kubelet[2391]: E1030 13:18:21.434972 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 30 13:18:21.492093 containerd[1588]: time="2025-10-30T13:18:21.492027567Z" level=info msg="StartContainer for \"43bd5c2c135845be63b9a456ab5a8dc964ae3f98a0762be08908c0d3e6a09a43\" returns successfully"
Oct 30 13:18:21.502466 containerd[1588]: time="2025-10-30T13:18:21.502411011Z" level=info msg="StartContainer for \"b86486685908e16e1ccaa842228a3c609b0805be8b57ddf05d4174cd18ced8b1\" returns successfully"
Oct 30 13:18:21.505904 containerd[1588]: time="2025-10-30T13:18:21.505855137Z" level=info msg="StartContainer for \"45b9d133c015e6f812a00d9b14036efc335fd4c3451a34ce8a0cbdc00e867566\" returns successfully"
Oct 30 13:18:22.172918 kubelet[2391]: I1030 13:18:22.172867 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 13:18:22.503184 kubelet[2391]: E1030 13:18:22.503044 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 13:18:22.503424 kubelet[2391]: E1030 13:18:22.503397 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:22.504288 kubelet[2391]: E1030 13:18:22.504240 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 13:18:22.504459 kubelet[2391]: E1030 13:18:22.504345 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:22.506059 kubelet[2391]: E1030 13:18:22.505926 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 30 13:18:22.506129 kubelet[2391]: E1030 13:18:22.506072 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:23.255796 kubelet[2391]: E1030 13:18:23.255738 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 30 13:18:23.346696 kubelet[2391]: I1030 13:18:23.346636 2391 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 30 13:18:23.346696 kubelet[2391]: E1030 13:18:23.346672 2391 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 30 13:18:23.354787 kubelet[2391]: E1030 13:18:23.354760 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 13:18:23.387239 kubelet[2391]: I1030 13:18:23.387195 2391 apiserver.go:52] "Watching apiserver"
Oct 30 13:18:23.405638 kubelet[2391]: I1030 13:18:23.405575 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 30 13:18:23.407106 kubelet[2391]: I1030 13:18:23.407073 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:23.414310 kubelet[2391]: E1030 13:18:23.414264 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:23.414310 kubelet[2391]: I1030 13:18:23.414290 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:23.415785 kubelet[2391]: E1030 13:18:23.415761 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:23.415785 kubelet[2391]: I1030 13:18:23.415777 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:23.417121 kubelet[2391]: E1030 13:18:23.417098 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:23.507085 kubelet[2391]: I1030 13:18:23.506928 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:23.507085 kubelet[2391]: I1030 13:18:23.506954 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:23.507630 kubelet[2391]: I1030 13:18:23.507590 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:23.508642 kubelet[2391]: E1030 13:18:23.508612 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:23.508788 kubelet[2391]: E1030 13:18:23.508757 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:23.509085 kubelet[2391]: E1030 13:18:23.509050 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:23.509234 kubelet[2391]: E1030 13:18:23.509103 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:23.509234 kubelet[2391]: E1030 13:18:23.509155 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:23.509284 kubelet[2391]: E1030 13:18:23.509264 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:24.509275 kubelet[2391]: I1030 13:18:24.509225 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:24.509927 kubelet[2391]: I1030 13:18:24.509390 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:24.513913 kubelet[2391]: E1030 13:18:24.513867 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:24.514210 kubelet[2391]: E1030 13:18:24.514165 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:25.182013 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit session-7.scope)...
Oct 30 13:18:25.182029 systemd[1]: Reloading...
Oct 30 13:18:25.254120 zram_generator::config[2728]: No configuration found.
Oct 30 13:18:25.487644 systemd[1]: Reloading finished in 305 ms.
Oct 30 13:18:25.511857 kubelet[2391]: E1030 13:18:25.511525 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:25.511857 kubelet[2391]: E1030 13:18:25.511782 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:25.523802 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 13:18:25.541331 systemd[1]: kubelet.service: Deactivated successfully.
Oct 30 13:18:25.541659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 13:18:25.541712 systemd[1]: kubelet.service: Consumed 1.272s CPU time, 131.5M memory peak.
Oct 30 13:18:25.543697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 13:18:25.789280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 13:18:25.808520 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 30 13:18:25.851621 kubelet[2769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 13:18:25.851621 kubelet[2769]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 30 13:18:25.851621 kubelet[2769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 13:18:25.852103 kubelet[2769]: I1030 13:18:25.851655 2769 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 30 13:18:25.860325 kubelet[2769]: I1030 13:18:25.860288 2769 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 30 13:18:25.860325 kubelet[2769]: I1030 13:18:25.860312 2769 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 30 13:18:25.860513 kubelet[2769]: I1030 13:18:25.860490 2769 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 30 13:18:25.861642 kubelet[2769]: I1030 13:18:25.861614 2769 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 30 13:18:25.865285 kubelet[2769]: I1030 13:18:25.865257 2769 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 30 13:18:25.870080 kubelet[2769]: I1030 13:18:25.870038 2769 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 30 13:18:25.876271 kubelet[2769]: I1030 13:18:25.876231 2769 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 30 13:18:25.876586 kubelet[2769]: I1030 13:18:25.876546 2769 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 30 13:18:25.876998 kubelet[2769]: I1030 13:18:25.876574 2769 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 30 13:18:25.877116 kubelet[2769]: I1030 13:18:25.877005 2769 topology_manager.go:138] "Creating topology manager with none policy"
Oct 30 13:18:25.877116 kubelet[2769]: I1030 13:18:25.877016 2769 container_manager_linux.go:303] "Creating device plugin manager"
Oct 30 13:18:25.877317 kubelet[2769]: I1030 13:18:25.877168 2769 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 13:18:25.877920 kubelet[2769]: I1030 13:18:25.877896 2769 kubelet.go:480] "Attempting to sync node with API server"
Oct 30 13:18:25.878051 kubelet[2769]: I1030 13:18:25.878024 2769 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 30 13:18:25.878182 kubelet[2769]: I1030 13:18:25.878169 2769 kubelet.go:386] "Adding apiserver pod source"
Oct 30 13:18:25.878318 kubelet[2769]: I1030 13:18:25.878305 2769 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 30 13:18:25.879887 kubelet[2769]: I1030 13:18:25.879857 2769 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 30 13:18:25.880362 kubelet[2769]: I1030 13:18:25.880316 2769 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 30 13:18:25.884934 kubelet[2769]: I1030 13:18:25.884902 2769 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 30 13:18:25.885004 kubelet[2769]: I1030 13:18:25.884954 2769 server.go:1289] "Started kubelet"
Oct 30 13:18:25.886349 kubelet[2769]: I1030 13:18:25.886321 2769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 30 13:18:25.891592 kubelet[2769]: I1030 13:18:25.891548 2769 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 30 13:18:25.893042 kubelet[2769]: E1030 13:18:25.892123 2769 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 30 13:18:25.893042 kubelet[2769]: I1030 13:18:25.892195 2769 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 30 13:18:25.893042 kubelet[2769]: I1030 13:18:25.892416 2769 server.go:317] "Adding debug handlers to kubelet server"
Oct 30 13:18:25.893042 kubelet[2769]: I1030 13:18:25.892435 2769 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 30 13:18:25.893042 kubelet[2769]: I1030 13:18:25.892573 2769 reconciler.go:26] "Reconciler: start to sync state"
Oct 30 13:18:25.894801 kubelet[2769]: I1030 13:18:25.894758 2769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 30 13:18:25.895333 kubelet[2769]: I1030 13:18:25.895316 2769 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 30 13:18:25.896105 kubelet[2769]: E1030 13:18:25.896086 2769 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 30 13:18:25.896458 kubelet[2769]: I1030 13:18:25.896431 2769 factory.go:223] Registration of the systemd container factory successfully
Oct 30 13:18:25.896607 kubelet[2769]: I1030 13:18:25.896589 2769 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 30 13:18:25.897019 kubelet[2769]: I1030 13:18:25.897004 2769 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 30 13:18:25.899873 kubelet[2769]: I1030 13:18:25.899838 2769 factory.go:223] Registration of the containerd container factory successfully
Oct 30 13:18:25.911006 kubelet[2769]: I1030 13:18:25.910837 2769 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 30 13:18:25.912258 kubelet[2769]: I1030 13:18:25.912221 2769 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 30 13:18:25.912258 kubelet[2769]: I1030 13:18:25.912255 2769 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 30 13:18:25.912345 kubelet[2769]: I1030 13:18:25.912277 2769 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 30 13:18:25.912345 kubelet[2769]: I1030 13:18:25.912286 2769 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 30 13:18:25.912345 kubelet[2769]: E1030 13:18:25.912335 2769 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 30 13:18:25.936725 kubelet[2769]: I1030 13:18:25.936699 2769 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 30 13:18:25.936725 kubelet[2769]: I1030 13:18:25.936719 2769 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 30 13:18:25.936819 kubelet[2769]: I1030 13:18:25.936739 2769 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 13:18:25.936909 kubelet[2769]: I1030 13:18:25.936865 2769 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 30 13:18:25.936909 kubelet[2769]: I1030 13:18:25.936882 2769 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 30 13:18:25.936909 kubelet[2769]: I1030 13:18:25.936902 2769 policy_none.go:49] "None policy: Start"
Oct 30 13:18:25.936909 kubelet[2769]: I1030 13:18:25.936913 2769 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 30 13:18:25.936909 kubelet[2769]: I1030 13:18:25.936924 2769 state_mem.go:35] "Initializing new in-memory state store"
Oct 30 13:18:25.937160 kubelet[2769]: I1030 13:18:25.937040 2769 state_mem.go:75] "Updated machine memory state"
Oct 30 13:18:25.941143 kubelet[2769]: E1030 13:18:25.941006 2769 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 30 13:18:25.941204 kubelet[2769]: I1030 13:18:25.941198 2769 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 30 13:18:25.941232 kubelet[2769]: I1030 13:18:25.941210 2769 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 30 13:18:25.941406 kubelet[2769]: I1030 13:18:25.941389 2769 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 30 13:18:25.942614 kubelet[2769]: E1030 13:18:25.942583 2769 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 30 13:18:26.013198 kubelet[2769]: I1030 13:18:26.013119 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:26.013198 kubelet[2769]: I1030 13:18:26.013196 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:26.013526 kubelet[2769]: I1030 13:18:26.013476 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:26.020761 kubelet[2769]: E1030 13:18:26.020700 2769 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:26.020825 kubelet[2769]: E1030 13:18:26.020801 2769 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:26.048111 kubelet[2769]: I1030 13:18:26.047975 2769 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 30 13:18:26.055640 kubelet[2769]: I1030 13:18:26.055587 2769 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Oct 30 13:18:26.055796 kubelet[2769]: I1030 13:18:26.055702 2769 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 30 13:18:26.193597 kubelet[2769]: I1030 13:18:26.193528 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3156c81665388dbde2419c0233b1879b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3156c81665388dbde2419c0233b1879b\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:26.193597 kubelet[2769]: I1030 13:18:26.193572 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3156c81665388dbde2419c0233b1879b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3156c81665388dbde2419c0233b1879b\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:26.193597 kubelet[2769]: I1030 13:18:26.193597 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:26.193597 kubelet[2769]: I1030 13:18:26.193615 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:26.193839 kubelet[2769]: I1030 13:18:26.193649 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3156c81665388dbde2419c0233b1879b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3156c81665388dbde2419c0233b1879b\") " pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:26.193839 kubelet[2769]: I1030 13:18:26.193707 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:26.193839 kubelet[2769]: I1030 13:18:26.193723 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:26.193839 kubelet[2769]: I1030 13:18:26.193748 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 30 13:18:26.193839 kubelet[2769]: I1030 13:18:26.193765 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:26.321853 kubelet[2769]: E1030 13:18:26.321605 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:26.321853 kubelet[2769]: E1030 13:18:26.321672 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:26.322628 kubelet[2769]: E1030 13:18:26.321802 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:26.880971 kubelet[2769]: I1030 13:18:26.880896 2769 apiserver.go:52] "Watching apiserver"
Oct 30 13:18:26.926884 kubelet[2769]: I1030 13:18:26.926843 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:26.927095 kubelet[2769]: I1030 13:18:26.926849 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:27.050740 kubelet[2769]: E1030 13:18:27.050067 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:27.094621 kubelet[2769]: I1030 13:18:27.093220 2769 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 30 13:18:27.095232 kubelet[2769]: E1030 13:18:27.094969 2769 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 30 13:18:27.095232 kubelet[2769]: E1030 13:18:27.095056 2769 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 30 13:18:27.095232 kubelet[2769]: E1030 13:18:27.095157 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:27.095327 kubelet[2769]: E1030 13:18:27.095313 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:27.282301 kubelet[2769]: I1030 13:18:27.282229 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.28220884 podStartE2EDuration="3.28220884s" podCreationTimestamp="2025-10-30 13:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:18:27.281606673 +0000 UTC m=+1.466556751" watchObservedRunningTime="2025-10-30 13:18:27.28220884 +0000 UTC m=+1.467158918"
Oct 30 13:18:27.552424 kubelet[2769]: I1030 13:18:27.552200 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.552160182 podStartE2EDuration="3.552160182s" podCreationTimestamp="2025-10-30 13:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:18:27.551170687 +0000 UTC m=+1.736120765" watchObservedRunningTime="2025-10-30 13:18:27.552160182 +0000 UTC m=+1.737110261"
Oct 30 13:18:27.928438 kubelet[2769]: E1030 13:18:27.928309 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:27.928853 kubelet[2769]: E1030 13:18:27.928772 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:27.972700 kubelet[2769]: I1030 13:18:27.972585 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.972568353 podStartE2EDuration="1.972568353s" podCreationTimestamp="2025-10-30 13:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:18:27.92411459 +0000 UTC m=+2.109064659" watchObservedRunningTime="2025-10-30 13:18:27.972568353 +0000 UTC m=+2.157518431"
Oct 30 13:18:30.268660 kubelet[2769]: E1030 13:18:30.268594 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:30.548525 kubelet[2769]: E1030 13:18:30.548392 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:32.203487 kubelet[2769]: I1030 13:18:32.203435 2769 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 30 13:18:32.203960 containerd[1588]: time="2025-10-30T13:18:32.203785352Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 30 13:18:32.204312 kubelet[2769]: I1030 13:18:32.204166 2769 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 30 13:18:32.790658 kubelet[2769]: E1030 13:18:32.790609 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:32.935769 kubelet[2769]: E1030 13:18:32.935723 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 13:18:33.335720 systemd[1]: Created slice kubepods-besteffort-podbb4fbc20_1ac3_415a_9dc3_a4f20b5b025b.slice - libcontainer container kubepods-besteffort-podbb4fbc20_1ac3_415a_9dc3_a4f20b5b025b.slice.
Oct 30 13:18:33.338003 kubelet[2769]: I1030 13:18:33.336410 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b-lib-modules\") pod \"kube-proxy-55lm6\" (UID: \"bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b\") " pod="kube-system/kube-proxy-55lm6"
Oct 30 13:18:33.338003 kubelet[2769]: I1030 13:18:33.336452 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgf7\" (UniqueName: \"kubernetes.io/projected/bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b-kube-api-access-mvgf7\") pod \"kube-proxy-55lm6\" (UID: \"bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b\") " pod="kube-system/kube-proxy-55lm6"
Oct 30 13:18:33.338003 kubelet[2769]: I1030 13:18:33.336475 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b-kube-proxy\") pod \"kube-proxy-55lm6\" (UID: \"bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b\") " pod="kube-system/kube-proxy-55lm6"
Oct 30 13:18:33.338003 kubelet[2769]: I1030 13:18:33.336490 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b-xtables-lock\") pod \"kube-proxy-55lm6\" (UID: \"bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b\") " pod="kube-system/kube-proxy-55lm6"
Oct 30 13:18:33.468582 systemd[1]: Created slice kubepods-besteffort-poda51169ff_b040_4f0e_9f2a_23fb6a2bc15f.slice - libcontainer container kubepods-besteffort-poda51169ff_b040_4f0e_9f2a_23fb6a2bc15f.slice.
Oct 30 13:18:33.537795 kubelet[2769]: I1030 13:18:33.537746 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a51169ff-b040-4f0e-9f2a-23fb6a2bc15f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-qxnmx\" (UID: \"a51169ff-b040-4f0e-9f2a-23fb6a2bc15f\") " pod="tigera-operator/tigera-operator-7dcd859c48-qxnmx" Oct 30 13:18:33.537795 kubelet[2769]: I1030 13:18:33.537790 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzsfw\" (UniqueName: \"kubernetes.io/projected/a51169ff-b040-4f0e-9f2a-23fb6a2bc15f-kube-api-access-tzsfw\") pod \"tigera-operator-7dcd859c48-qxnmx\" (UID: \"a51169ff-b040-4f0e-9f2a-23fb6a2bc15f\") " pod="tigera-operator/tigera-operator-7dcd859c48-qxnmx" Oct 30 13:18:33.658414 kubelet[2769]: E1030 13:18:33.658241 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:33.659208 containerd[1588]: time="2025-10-30T13:18:33.658897724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55lm6,Uid:bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b,Namespace:kube-system,Attempt:0,}" Oct 30 13:18:33.700250 containerd[1588]: time="2025-10-30T13:18:33.700192133Z" level=info msg="connecting to shim a9cf130cb49655643a529e4d436655868113fee54917e3055842847d759f9337" address="unix:///run/containerd/s/068b6b99d4f9917e4b9f4e60b298855ab835fa0069884344972d55262da33482" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:18:33.772002 containerd[1588]: time="2025-10-30T13:18:33.771931765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-qxnmx,Uid:a51169ff-b040-4f0e-9f2a-23fb6a2bc15f,Namespace:tigera-operator,Attempt:0,}" Oct 30 13:18:33.775131 systemd[1]: Started 
cri-containerd-a9cf130cb49655643a529e4d436655868113fee54917e3055842847d759f9337.scope - libcontainer container a9cf130cb49655643a529e4d436655868113fee54917e3055842847d759f9337. Oct 30 13:18:33.805124 containerd[1588]: time="2025-10-30T13:18:33.805087284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55lm6,Uid:bb4fbc20-1ac3-415a-9dc3-a4f20b5b025b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9cf130cb49655643a529e4d436655868113fee54917e3055842847d759f9337\"" Oct 30 13:18:33.805925 kubelet[2769]: E1030 13:18:33.805890 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:33.813671 containerd[1588]: time="2025-10-30T13:18:33.813635625Z" level=info msg="CreateContainer within sandbox \"a9cf130cb49655643a529e4d436655868113fee54917e3055842847d759f9337\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 13:18:33.820032 containerd[1588]: time="2025-10-30T13:18:33.819954869Z" level=info msg="connecting to shim af5d9cfa47448b30d0efc2047aad56feb5b3515a76b1cd0b24cec0331e7cccaa" address="unix:///run/containerd/s/aaa6dc37d22d316063916a5afa3ac4d59a3ed4990cfffc3e0e42c0b85d2d3c6c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:18:33.827027 containerd[1588]: time="2025-10-30T13:18:33.826276578Z" level=info msg="Container 1b7313aa010d07fca0d925ec680959dbe33ad8e787c53c40ed4e11ef0ac6e3d7: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:18:33.834714 containerd[1588]: time="2025-10-30T13:18:33.834676397Z" level=info msg="CreateContainer within sandbox \"a9cf130cb49655643a529e4d436655868113fee54917e3055842847d759f9337\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b7313aa010d07fca0d925ec680959dbe33ad8e787c53c40ed4e11ef0ac6e3d7\"" Oct 30 13:18:33.835513 containerd[1588]: time="2025-10-30T13:18:33.835479655Z" level=info msg="StartContainer for 
\"1b7313aa010d07fca0d925ec680959dbe33ad8e787c53c40ed4e11ef0ac6e3d7\"" Oct 30 13:18:33.837483 containerd[1588]: time="2025-10-30T13:18:33.837437762Z" level=info msg="connecting to shim 1b7313aa010d07fca0d925ec680959dbe33ad8e787c53c40ed4e11ef0ac6e3d7" address="unix:///run/containerd/s/068b6b99d4f9917e4b9f4e60b298855ab835fa0069884344972d55262da33482" protocol=ttrpc version=3 Oct 30 13:18:33.853295 systemd[1]: Started cri-containerd-af5d9cfa47448b30d0efc2047aad56feb5b3515a76b1cd0b24cec0331e7cccaa.scope - libcontainer container af5d9cfa47448b30d0efc2047aad56feb5b3515a76b1cd0b24cec0331e7cccaa. Oct 30 13:18:33.861337 systemd[1]: Started cri-containerd-1b7313aa010d07fca0d925ec680959dbe33ad8e787c53c40ed4e11ef0ac6e3d7.scope - libcontainer container 1b7313aa010d07fca0d925ec680959dbe33ad8e787c53c40ed4e11ef0ac6e3d7. Oct 30 13:18:33.902541 containerd[1588]: time="2025-10-30T13:18:33.902486877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-qxnmx,Uid:a51169ff-b040-4f0e-9f2a-23fb6a2bc15f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"af5d9cfa47448b30d0efc2047aad56feb5b3515a76b1cd0b24cec0331e7cccaa\"" Oct 30 13:18:33.904535 containerd[1588]: time="2025-10-30T13:18:33.904513087Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 13:18:33.916510 containerd[1588]: time="2025-10-30T13:18:33.915889092Z" level=info msg="StartContainer for \"1b7313aa010d07fca0d925ec680959dbe33ad8e787c53c40ed4e11ef0ac6e3d7\" returns successfully" Oct 30 13:18:33.942002 kubelet[2769]: E1030 13:18:33.941507 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:33.950086 kubelet[2769]: I1030 13:18:33.949647 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-55lm6" podStartSLOduration=0.949627918 podStartE2EDuration="949.627918ms" 
podCreationTimestamp="2025-10-30 13:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:18:33.948697332 +0000 UTC m=+8.133647410" watchObservedRunningTime="2025-10-30 13:18:33.949627918 +0000 UTC m=+8.134577996" Oct 30 13:18:35.211752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673757880.mount: Deactivated successfully. Oct 30 13:18:35.333640 update_engine[1578]: I20251030 13:18:35.333509 1578 update_attempter.cc:509] Updating boot flags... Oct 30 13:18:35.829008 containerd[1588]: time="2025-10-30T13:18:35.828928844Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:35.829700 containerd[1588]: time="2025-10-30T13:18:35.829624271Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 13:18:35.830868 containerd[1588]: time="2025-10-30T13:18:35.830831673Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:35.832737 containerd[1588]: time="2025-10-30T13:18:35.832703366Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:35.833319 containerd[1588]: time="2025-10-30T13:18:35.833275216Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.928735904s" Oct 30 13:18:35.833319 containerd[1588]: 
time="2025-10-30T13:18:35.833313035Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 13:18:35.837296 containerd[1588]: time="2025-10-30T13:18:35.837268644Z" level=info msg="CreateContainer within sandbox \"af5d9cfa47448b30d0efc2047aad56feb5b3515a76b1cd0b24cec0331e7cccaa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 13:18:35.845715 containerd[1588]: time="2025-10-30T13:18:35.845144648Z" level=info msg="Container 3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:18:35.848581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1044606506.mount: Deactivated successfully. Oct 30 13:18:35.851594 containerd[1588]: time="2025-10-30T13:18:35.851556337Z" level=info msg="CreateContainer within sandbox \"af5d9cfa47448b30d0efc2047aad56feb5b3515a76b1cd0b24cec0331e7cccaa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba\"" Oct 30 13:18:35.852172 containerd[1588]: time="2025-10-30T13:18:35.851961770Z" level=info msg="StartContainer for \"3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba\"" Oct 30 13:18:35.852685 containerd[1588]: time="2025-10-30T13:18:35.852658690Z" level=info msg="connecting to shim 3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba" address="unix:///run/containerd/s/aaa6dc37d22d316063916a5afa3ac4d59a3ed4990cfffc3e0e42c0b85d2d3c6c" protocol=ttrpc version=3 Oct 30 13:18:35.878137 systemd[1]: Started cri-containerd-3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba.scope - libcontainer container 3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba. 
Oct 30 13:18:35.910321 containerd[1588]: time="2025-10-30T13:18:35.910279578Z" level=info msg="StartContainer for \"3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba\" returns successfully" Oct 30 13:18:35.954120 kubelet[2769]: I1030 13:18:35.954044 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-qxnmx" podStartSLOduration=1.023893531 podStartE2EDuration="2.954029242s" podCreationTimestamp="2025-10-30 13:18:33 +0000 UTC" firstStartedPulling="2025-10-30 13:18:33.903823988 +0000 UTC m=+8.088774057" lastFinishedPulling="2025-10-30 13:18:35.83395969 +0000 UTC m=+10.018909768" observedRunningTime="2025-10-30 13:18:35.953617956 +0000 UTC m=+10.138568034" watchObservedRunningTime="2025-10-30 13:18:35.954029242 +0000 UTC m=+10.138979520" Oct 30 13:18:37.868483 systemd[1]: cri-containerd-3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba.scope: Deactivated successfully. Oct 30 13:18:37.872406 containerd[1588]: time="2025-10-30T13:18:37.872361055Z" level=info msg="received exit event container_id:\"3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba\" id:\"3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba\" pid:3117 exit_status:1 exited_at:{seconds:1761830317 nanos:870637923}" Oct 30 13:18:37.872719 containerd[1588]: time="2025-10-30T13:18:37.872611511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba\" id:\"3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba\" pid:3117 exit_status:1 exited_at:{seconds:1761830317 nanos:870637923}" Oct 30 13:18:37.931565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba-rootfs.mount: Deactivated successfully. 
Oct 30 13:18:38.953053 kubelet[2769]: I1030 13:18:38.952975 2769 scope.go:117] "RemoveContainer" containerID="3c1f52984c86f314f559d430eb45618f8f9f172a32d46e6b7a891d99d2797cba" Oct 30 13:18:38.954896 containerd[1588]: time="2025-10-30T13:18:38.954833217Z" level=info msg="CreateContainer within sandbox \"af5d9cfa47448b30d0efc2047aad56feb5b3515a76b1cd0b24cec0331e7cccaa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 30 13:18:38.965678 containerd[1588]: time="2025-10-30T13:18:38.965624023Z" level=info msg="Container 7ae96c58f08f52be272bc7c73450f6a38f9a22ea14802ba7deda953631e67a28: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:18:38.972557 containerd[1588]: time="2025-10-30T13:18:38.972505886Z" level=info msg="CreateContainer within sandbox \"af5d9cfa47448b30d0efc2047aad56feb5b3515a76b1cd0b24cec0331e7cccaa\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7ae96c58f08f52be272bc7c73450f6a38f9a22ea14802ba7deda953631e67a28\"" Oct 30 13:18:38.973376 containerd[1588]: time="2025-10-30T13:18:38.973309366Z" level=info msg="StartContainer for \"7ae96c58f08f52be272bc7c73450f6a38f9a22ea14802ba7deda953631e67a28\"" Oct 30 13:18:38.974696 containerd[1588]: time="2025-10-30T13:18:38.974652522Z" level=info msg="connecting to shim 7ae96c58f08f52be272bc7c73450f6a38f9a22ea14802ba7deda953631e67a28" address="unix:///run/containerd/s/aaa6dc37d22d316063916a5afa3ac4d59a3ed4990cfffc3e0e42c0b85d2d3c6c" protocol=ttrpc version=3 Oct 30 13:18:38.999132 systemd[1]: Started cri-containerd-7ae96c58f08f52be272bc7c73450f6a38f9a22ea14802ba7deda953631e67a28.scope - libcontainer container 7ae96c58f08f52be272bc7c73450f6a38f9a22ea14802ba7deda953631e67a28. 
Oct 30 13:18:39.035216 containerd[1588]: time="2025-10-30T13:18:39.035078708Z" level=info msg="StartContainer for \"7ae96c58f08f52be272bc7c73450f6a38f9a22ea14802ba7deda953631e67a28\" returns successfully" Oct 30 13:18:40.272895 kubelet[2769]: E1030 13:18:40.272847 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:40.552448 kubelet[2769]: E1030 13:18:40.552298 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:40.959692 kubelet[2769]: E1030 13:18:40.959562 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:41.091880 sudo[1811]: pam_unix(sudo:session): session closed for user root Oct 30 13:18:41.093802 sshd[1810]: Connection closed by 10.0.0.1 port 39396 Oct 30 13:18:41.094321 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Oct 30 13:18:41.098579 systemd[1]: sshd@6-10.0.0.37:22-10.0.0.1:39396.service: Deactivated successfully. Oct 30 13:18:41.100955 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 13:18:41.101206 systemd[1]: session-7.scope: Consumed 6.810s CPU time, 223M memory peak. Oct 30 13:18:41.103929 systemd-logind[1576]: Session 7 logged out. Waiting for processes to exit. Oct 30 13:18:41.104895 systemd-logind[1576]: Removed session 7. Oct 30 13:18:46.473026 systemd[1]: Created slice kubepods-besteffort-pod2a9c1110_f7dd_42ed_9293_cae7bdbba496.slice - libcontainer container kubepods-besteffort-pod2a9c1110_f7dd_42ed_9293_cae7bdbba496.slice. 
Oct 30 13:18:46.521353 kubelet[2769]: I1030 13:18:46.521290 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a9c1110-f7dd-42ed-9293-cae7bdbba496-typha-certs\") pod \"calico-typha-5455c56b47-s245x\" (UID: \"2a9c1110-f7dd-42ed-9293-cae7bdbba496\") " pod="calico-system/calico-typha-5455c56b47-s245x" Oct 30 13:18:46.521353 kubelet[2769]: I1030 13:18:46.521335 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a9c1110-f7dd-42ed-9293-cae7bdbba496-tigera-ca-bundle\") pod \"calico-typha-5455c56b47-s245x\" (UID: \"2a9c1110-f7dd-42ed-9293-cae7bdbba496\") " pod="calico-system/calico-typha-5455c56b47-s245x" Oct 30 13:18:46.521353 kubelet[2769]: I1030 13:18:46.521360 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9wrx\" (UniqueName: \"kubernetes.io/projected/2a9c1110-f7dd-42ed-9293-cae7bdbba496-kube-api-access-t9wrx\") pod \"calico-typha-5455c56b47-s245x\" (UID: \"2a9c1110-f7dd-42ed-9293-cae7bdbba496\") " pod="calico-system/calico-typha-5455c56b47-s245x" Oct 30 13:18:46.560184 systemd[1]: Created slice kubepods-besteffort-pod060a88b0_56fe_4a8f_ad83_4afe49c5685e.slice - libcontainer container kubepods-besteffort-pod060a88b0_56fe_4a8f_ad83_4afe49c5685e.slice. 
Oct 30 13:18:46.622032 kubelet[2769]: I1030 13:18:46.621953 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/060a88b0-56fe-4a8f-ad83-4afe49c5685e-flexvol-driver-host\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622032 kubelet[2769]: I1030 13:18:46.622039 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/060a88b0-56fe-4a8f-ad83-4afe49c5685e-policysync\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622224 kubelet[2769]: I1030 13:18:46.622060 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/060a88b0-56fe-4a8f-ad83-4afe49c5685e-cni-bin-dir\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622224 kubelet[2769]: I1030 13:18:46.622135 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/060a88b0-56fe-4a8f-ad83-4afe49c5685e-var-lib-calico\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622224 kubelet[2769]: I1030 13:18:46.622185 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/060a88b0-56fe-4a8f-ad83-4afe49c5685e-lib-modules\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622224 kubelet[2769]: I1030 13:18:46.622204 2769 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/060a88b0-56fe-4a8f-ad83-4afe49c5685e-tigera-ca-bundle\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622224 kubelet[2769]: I1030 13:18:46.622220 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/060a88b0-56fe-4a8f-ad83-4afe49c5685e-xtables-lock\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622340 kubelet[2769]: I1030 13:18:46.622236 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/060a88b0-56fe-4a8f-ad83-4afe49c5685e-var-run-calico\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622340 kubelet[2769]: I1030 13:18:46.622253 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/060a88b0-56fe-4a8f-ad83-4afe49c5685e-node-certs\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622340 kubelet[2769]: I1030 13:18:46.622278 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/060a88b0-56fe-4a8f-ad83-4afe49c5685e-cni-log-dir\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622340 kubelet[2769]: I1030 13:18:46.622293 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/060a88b0-56fe-4a8f-ad83-4afe49c5685e-cni-net-dir\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.622340 kubelet[2769]: I1030 13:18:46.622308 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhp2p\" (UniqueName: \"kubernetes.io/projected/060a88b0-56fe-4a8f-ad83-4afe49c5685e-kube-api-access-dhp2p\") pod \"calico-node-7jhjv\" (UID: \"060a88b0-56fe-4a8f-ad83-4afe49c5685e\") " pod="calico-system/calico-node-7jhjv" Oct 30 13:18:46.697705 kubelet[2769]: E1030 13:18:46.697637 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:18:46.722943 kubelet[2769]: I1030 13:18:46.722891 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f14ca4d9-aac0-4af6-a374-d183e93fb183-registration-dir\") pod \"csi-node-driver-jqhml\" (UID: \"f14ca4d9-aac0-4af6-a374-d183e93fb183\") " pod="calico-system/csi-node-driver-jqhml" Oct 30 13:18:46.722943 kubelet[2769]: I1030 13:18:46.722930 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpfc7\" (UniqueName: \"kubernetes.io/projected/f14ca4d9-aac0-4af6-a374-d183e93fb183-kube-api-access-bpfc7\") pod \"csi-node-driver-jqhml\" (UID: \"f14ca4d9-aac0-4af6-a374-d183e93fb183\") " pod="calico-system/csi-node-driver-jqhml" Oct 30 13:18:46.723192 kubelet[2769]: I1030 13:18:46.723040 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f14ca4d9-aac0-4af6-a374-d183e93fb183-socket-dir\") pod \"csi-node-driver-jqhml\" (UID: \"f14ca4d9-aac0-4af6-a374-d183e93fb183\") " pod="calico-system/csi-node-driver-jqhml" Oct 30 13:18:46.723192 kubelet[2769]: I1030 13:18:46.723058 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f14ca4d9-aac0-4af6-a374-d183e93fb183-varrun\") pod \"csi-node-driver-jqhml\" (UID: \"f14ca4d9-aac0-4af6-a374-d183e93fb183\") " pod="calico-system/csi-node-driver-jqhml" Oct 30 13:18:46.723192 kubelet[2769]: I1030 13:18:46.723089 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14ca4d9-aac0-4af6-a374-d183e93fb183-kubelet-dir\") pod \"csi-node-driver-jqhml\" (UID: \"f14ca4d9-aac0-4af6-a374-d183e93fb183\") " pod="calico-system/csi-node-driver-jqhml" Oct 30 13:18:46.729257 kubelet[2769]: E1030 13:18:46.729129 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.729257 kubelet[2769]: W1030 13:18:46.729160 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.730829 kubelet[2769]: E1030 13:18:46.730800 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.732203 kubelet[2769]: E1030 13:18:46.732173 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.732203 kubelet[2769]: W1030 13:18:46.732190 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.732281 kubelet[2769]: E1030 13:18:46.732209 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.740810 kubelet[2769]: E1030 13:18:46.740776 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.740810 kubelet[2769]: W1030 13:18:46.740796 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.740810 kubelet[2769]: E1030 13:18:46.740813 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.782455 kubelet[2769]: E1030 13:18:46.782383 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:46.783340 containerd[1588]: time="2025-10-30T13:18:46.783271775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5455c56b47-s245x,Uid:2a9c1110-f7dd-42ed-9293-cae7bdbba496,Namespace:calico-system,Attempt:0,}" Oct 30 13:18:46.805784 containerd[1588]: time="2025-10-30T13:18:46.805721875Z" level=info msg="connecting to shim 25350e1a2a5dc96abeba9a81205f27d26669a7f76b9598ce86658f69e6ec1eb8" address="unix:///run/containerd/s/896b9943ec43ef8d0b27aab2ce2febcc753c90652c557d89507141afdbd3181c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:18:46.824607 kubelet[2769]: E1030 13:18:46.824368 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.824607 kubelet[2769]: W1030 13:18:46.824403 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.824607 kubelet[2769]: E1030 13:18:46.824431 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.824901 kubelet[2769]: E1030 13:18:46.824849 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.824901 kubelet[2769]: W1030 13:18:46.824891 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.825032 kubelet[2769]: E1030 13:18:46.824917 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.825969 kubelet[2769]: E1030 13:18:46.825831 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.825969 kubelet[2769]: W1030 13:18:46.825967 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.826166 kubelet[2769]: E1030 13:18:46.826045 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.826741 kubelet[2769]: E1030 13:18:46.826700 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.826826 kubelet[2769]: W1030 13:18:46.826808 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.826885 kubelet[2769]: E1030 13:18:46.826825 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.827481 kubelet[2769]: E1030 13:18:46.827451 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.827481 kubelet[2769]: W1030 13:18:46.827469 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.827481 kubelet[2769]: E1030 13:18:46.827479 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.827789 kubelet[2769]: E1030 13:18:46.827772 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.827789 kubelet[2769]: W1030 13:18:46.827785 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.827872 kubelet[2769]: E1030 13:18:46.827795 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.828327 kubelet[2769]: E1030 13:18:46.828310 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.828327 kubelet[2769]: W1030 13:18:46.828323 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.828397 kubelet[2769]: E1030 13:18:46.828334 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.828856 kubelet[2769]: E1030 13:18:46.828831 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.828856 kubelet[2769]: W1030 13:18:46.828849 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.828856 kubelet[2769]: E1030 13:18:46.828859 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.829383 kubelet[2769]: E1030 13:18:46.829360 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.829383 kubelet[2769]: W1030 13:18:46.829376 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.829459 kubelet[2769]: E1030 13:18:46.829388 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.829797 kubelet[2769]: E1030 13:18:46.829779 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.829797 kubelet[2769]: W1030 13:18:46.829791 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.829860 kubelet[2769]: E1030 13:18:46.829802 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.830230 kubelet[2769]: E1030 13:18:46.830200 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.830230 kubelet[2769]: W1030 13:18:46.830224 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.830348 kubelet[2769]: E1030 13:18:46.830235 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.830791 kubelet[2769]: E1030 13:18:46.830758 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.830791 kubelet[2769]: W1030 13:18:46.830775 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.830791 kubelet[2769]: E1030 13:18:46.830786 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.831562 kubelet[2769]: E1030 13:18:46.831533 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.831562 kubelet[2769]: W1030 13:18:46.831547 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.831562 kubelet[2769]: E1030 13:18:46.831558 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.832022 kubelet[2769]: E1030 13:18:46.831975 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.832022 kubelet[2769]: W1030 13:18:46.832021 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.832080 kubelet[2769]: E1030 13:18:46.832033 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.832329 kubelet[2769]: E1030 13:18:46.832293 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.832329 kubelet[2769]: W1030 13:18:46.832323 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.832429 kubelet[2769]: E1030 13:18:46.832335 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.832617 kubelet[2769]: E1030 13:18:46.832586 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.832617 kubelet[2769]: W1030 13:18:46.832611 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.832688 kubelet[2769]: E1030 13:18:46.832623 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.833014 kubelet[2769]: E1030 13:18:46.832972 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.833014 kubelet[2769]: W1030 13:18:46.832998 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.833097 kubelet[2769]: E1030 13:18:46.833031 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.833338 kubelet[2769]: E1030 13:18:46.833316 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.833338 kubelet[2769]: W1030 13:18:46.833330 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.833338 kubelet[2769]: E1030 13:18:46.833340 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.833606 kubelet[2769]: E1030 13:18:46.833585 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.833606 kubelet[2769]: W1030 13:18:46.833598 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.833606 kubelet[2769]: E1030 13:18:46.833607 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.834022 kubelet[2769]: E1030 13:18:46.833845 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.834022 kubelet[2769]: W1030 13:18:46.833870 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.834022 kubelet[2769]: E1030 13:18:46.833880 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.835202 kubelet[2769]: E1030 13:18:46.835166 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.835202 kubelet[2769]: W1030 13:18:46.835188 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.835202 kubelet[2769]: E1030 13:18:46.835199 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.836777 kubelet[2769]: E1030 13:18:46.836737 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.836777 kubelet[2769]: W1030 13:18:46.836767 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.836857 kubelet[2769]: E1030 13:18:46.836781 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.837365 kubelet[2769]: E1030 13:18:46.837312 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.837365 kubelet[2769]: W1030 13:18:46.837338 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.837365 kubelet[2769]: E1030 13:18:46.837356 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:46.837620 kubelet[2769]: E1030 13:18:46.837595 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.837620 kubelet[2769]: W1030 13:18:46.837613 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.837734 kubelet[2769]: E1030 13:18:46.837634 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.837917 kubelet[2769]: E1030 13:18:46.837898 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.837917 kubelet[2769]: W1030 13:18:46.837910 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.837917 kubelet[2769]: E1030 13:18:46.837920 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.847182 systemd[1]: Started cri-containerd-25350e1a2a5dc96abeba9a81205f27d26669a7f76b9598ce86658f69e6ec1eb8.scope - libcontainer container 25350e1a2a5dc96abeba9a81205f27d26669a7f76b9598ce86658f69e6ec1eb8. 
Oct 30 13:18:46.851048 kubelet[2769]: E1030 13:18:46.850968 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:46.851048 kubelet[2769]: W1030 13:18:46.851029 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:46.851194 kubelet[2769]: E1030 13:18:46.851075 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:46.863315 kubelet[2769]: E1030 13:18:46.863282 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:46.864466 containerd[1588]: time="2025-10-30T13:18:46.864429310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7jhjv,Uid:060a88b0-56fe-4a8f-ad83-4afe49c5685e,Namespace:calico-system,Attempt:0,}" Oct 30 13:18:46.907066 containerd[1588]: time="2025-10-30T13:18:46.906618418Z" level=info msg="connecting to shim 81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2" address="unix:///run/containerd/s/28940e1528cfb68f70ff184cb066258c9a28b2ab2fe2ba0a24f80d1432616343" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:18:46.918463 containerd[1588]: time="2025-10-30T13:18:46.918339675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5455c56b47-s245x,Uid:2a9c1110-f7dd-42ed-9293-cae7bdbba496,Namespace:calico-system,Attempt:0,} returns sandbox id \"25350e1a2a5dc96abeba9a81205f27d26669a7f76b9598ce86658f69e6ec1eb8\"" Oct 30 13:18:46.923809 kubelet[2769]: E1030 13:18:46.923485 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:46.928973 containerd[1588]: time="2025-10-30T13:18:46.928918898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 13:18:46.972386 systemd[1]: Started cri-containerd-81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2.scope - libcontainer container 81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2. Oct 30 13:18:47.004311 containerd[1588]: time="2025-10-30T13:18:47.004187313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7jhjv,Uid:060a88b0-56fe-4a8f-ad83-4afe49c5685e,Namespace:calico-system,Attempt:0,} returns sandbox id \"81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2\"" Oct 30 13:18:47.005418 kubelet[2769]: E1030 13:18:47.005376 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:48.540135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714795360.mount: Deactivated successfully. 
Oct 30 13:18:48.913038 kubelet[2769]: E1030 13:18:48.912862 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:18:49.112227 containerd[1588]: time="2025-10-30T13:18:49.112164060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:49.112912 containerd[1588]: time="2025-10-30T13:18:49.112873028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 30 13:18:49.113976 containerd[1588]: time="2025-10-30T13:18:49.113930027Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:49.115902 containerd[1588]: time="2025-10-30T13:18:49.115871221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:49.116713 containerd[1588]: time="2025-10-30T13:18:49.116658224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.187686631s" Oct 30 13:18:49.116713 containerd[1588]: time="2025-10-30T13:18:49.116704436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 30 13:18:49.118005 containerd[1588]: time="2025-10-30T13:18:49.117937815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 13:18:49.129473 containerd[1588]: time="2025-10-30T13:18:49.129428643Z" level=info msg="CreateContainer within sandbox \"25350e1a2a5dc96abeba9a81205f27d26669a7f76b9598ce86658f69e6ec1eb8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 13:18:49.137127 containerd[1588]: time="2025-10-30T13:18:49.137086851Z" level=info msg="Container 67b843ac4df6bb5b9808d7a97ea2740038572f2285b2fa022f46ca446e9d3dc4: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:18:49.144168 containerd[1588]: time="2025-10-30T13:18:49.144121840Z" level=info msg="CreateContainer within sandbox \"25350e1a2a5dc96abeba9a81205f27d26669a7f76b9598ce86658f69e6ec1eb8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"67b843ac4df6bb5b9808d7a97ea2740038572f2285b2fa022f46ca446e9d3dc4\"" Oct 30 13:18:49.144624 containerd[1588]: time="2025-10-30T13:18:49.144586082Z" level=info msg="StartContainer for \"67b843ac4df6bb5b9808d7a97ea2740038572f2285b2fa022f46ca446e9d3dc4\"" Oct 30 13:18:49.145629 containerd[1588]: time="2025-10-30T13:18:49.145588272Z" level=info msg="connecting to shim 67b843ac4df6bb5b9808d7a97ea2740038572f2285b2fa022f46ca446e9d3dc4" address="unix:///run/containerd/s/896b9943ec43ef8d0b27aab2ce2febcc753c90652c557d89507141afdbd3181c" protocol=ttrpc version=3 Oct 30 13:18:49.172352 systemd[1]: Started cri-containerd-67b843ac4df6bb5b9808d7a97ea2740038572f2285b2fa022f46ca446e9d3dc4.scope - libcontainer container 67b843ac4df6bb5b9808d7a97ea2740038572f2285b2fa022f46ca446e9d3dc4. 
Oct 30 13:18:49.225227 containerd[1588]: time="2025-10-30T13:18:49.225171394Z" level=info msg="StartContainer for \"67b843ac4df6bb5b9808d7a97ea2740038572f2285b2fa022f46ca446e9d3dc4\" returns successfully" Oct 30 13:18:49.984401 kubelet[2769]: E1030 13:18:49.983874 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:49.994387 kubelet[2769]: I1030 13:18:49.994310 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5455c56b47-s245x" podStartSLOduration=1.801208279 podStartE2EDuration="3.994291787s" podCreationTimestamp="2025-10-30 13:18:46 +0000 UTC" firstStartedPulling="2025-10-30 13:18:46.924584081 +0000 UTC m=+21.109534159" lastFinishedPulling="2025-10-30 13:18:49.117667589 +0000 UTC m=+23.302617667" observedRunningTime="2025-10-30 13:18:49.993080582 +0000 UTC m=+24.178030670" watchObservedRunningTime="2025-10-30 13:18:49.994291787 +0000 UTC m=+24.179241865" Oct 30 13:18:50.032415 kubelet[2769]: E1030 13:18:50.032376 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.032415 kubelet[2769]: W1030 13:18:50.032407 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.032581 kubelet[2769]: E1030 13:18:50.032436 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.032693 kubelet[2769]: E1030 13:18:50.032676 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.032693 kubelet[2769]: W1030 13:18:50.032688 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.032748 kubelet[2769]: E1030 13:18:50.032699 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.032956 kubelet[2769]: E1030 13:18:50.032931 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.032956 kubelet[2769]: W1030 13:18:50.032944 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.032956 kubelet[2769]: E1030 13:18:50.032952 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.033215 kubelet[2769]: E1030 13:18:50.033195 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.033215 kubelet[2769]: W1030 13:18:50.033207 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.033215 kubelet[2769]: E1030 13:18:50.033217 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.033463 kubelet[2769]: E1030 13:18:50.033445 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.033463 kubelet[2769]: W1030 13:18:50.033456 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.033591 kubelet[2769]: E1030 13:18:50.033467 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.033664 kubelet[2769]: E1030 13:18:50.033647 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.033664 kubelet[2769]: W1030 13:18:50.033657 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.033664 kubelet[2769]: E1030 13:18:50.033665 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.033867 kubelet[2769]: E1030 13:18:50.033849 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.033867 kubelet[2769]: W1030 13:18:50.033860 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.033867 kubelet[2769]: E1030 13:18:50.033868 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.034075 kubelet[2769]: E1030 13:18:50.034055 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.034075 kubelet[2769]: W1030 13:18:50.034069 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.034128 kubelet[2769]: E1030 13:18:50.034080 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.034296 kubelet[2769]: E1030 13:18:50.034278 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.034296 kubelet[2769]: W1030 13:18:50.034290 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.034353 kubelet[2769]: E1030 13:18:50.034300 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.034536 kubelet[2769]: E1030 13:18:50.034518 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.034536 kubelet[2769]: W1030 13:18:50.034529 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.034584 kubelet[2769]: E1030 13:18:50.034538 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.034726 kubelet[2769]: E1030 13:18:50.034709 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.034726 kubelet[2769]: W1030 13:18:50.034719 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.034781 kubelet[2769]: E1030 13:18:50.034727 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.034931 kubelet[2769]: E1030 13:18:50.034910 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.034931 kubelet[2769]: W1030 13:18:50.034923 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.034989 kubelet[2769]: E1030 13:18:50.034933 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.035188 kubelet[2769]: E1030 13:18:50.035166 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.035188 kubelet[2769]: W1030 13:18:50.035180 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.035245 kubelet[2769]: E1030 13:18:50.035191 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.035429 kubelet[2769]: E1030 13:18:50.035406 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.035429 kubelet[2769]: W1030 13:18:50.035421 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.035478 kubelet[2769]: E1030 13:18:50.035432 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.035634 kubelet[2769]: E1030 13:18:50.035614 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.035634 kubelet[2769]: W1030 13:18:50.035625 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.035634 kubelet[2769]: E1030 13:18:50.035633 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.051033 kubelet[2769]: E1030 13:18:50.051004 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.051033 kubelet[2769]: W1030 13:18:50.051022 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.051033 kubelet[2769]: E1030 13:18:50.051037 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.051379 kubelet[2769]: E1030 13:18:50.051344 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.051423 kubelet[2769]: W1030 13:18:50.051374 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.051423 kubelet[2769]: E1030 13:18:50.051403 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.051669 kubelet[2769]: E1030 13:18:50.051639 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.051669 kubelet[2769]: W1030 13:18:50.051656 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.051669 kubelet[2769]: E1030 13:18:50.051666 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.051867 kubelet[2769]: E1030 13:18:50.051850 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.051867 kubelet[2769]: W1030 13:18:50.051861 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.051867 kubelet[2769]: E1030 13:18:50.051868 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.052078 kubelet[2769]: E1030 13:18:50.052060 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.052078 kubelet[2769]: W1030 13:18:50.052071 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.052078 kubelet[2769]: E1030 13:18:50.052079 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.052327 kubelet[2769]: E1030 13:18:50.052308 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.052327 kubelet[2769]: W1030 13:18:50.052319 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.052327 kubelet[2769]: E1030 13:18:50.052327 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.052658 kubelet[2769]: E1030 13:18:50.052643 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.052658 kubelet[2769]: W1030 13:18:50.052654 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.052727 kubelet[2769]: E1030 13:18:50.052664 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.052878 kubelet[2769]: E1030 13:18:50.052859 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.052878 kubelet[2769]: W1030 13:18:50.052871 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.052945 kubelet[2769]: E1030 13:18:50.052881 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.053087 kubelet[2769]: E1030 13:18:50.053069 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.053087 kubelet[2769]: W1030 13:18:50.053080 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.053087 kubelet[2769]: E1030 13:18:50.053089 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.053280 kubelet[2769]: E1030 13:18:50.053262 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.053280 kubelet[2769]: W1030 13:18:50.053272 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.053357 kubelet[2769]: E1030 13:18:50.053283 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.053486 kubelet[2769]: E1030 13:18:50.053470 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.053486 kubelet[2769]: W1030 13:18:50.053480 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.053486 kubelet[2769]: E1030 13:18:50.053488 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.053792 kubelet[2769]: E1030 13:18:50.053773 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.053792 kubelet[2769]: W1030 13:18:50.053788 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.053871 kubelet[2769]: E1030 13:18:50.053799 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.054014 kubelet[2769]: E1030 13:18:50.053996 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.054014 kubelet[2769]: W1030 13:18:50.054009 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.054091 kubelet[2769]: E1030 13:18:50.054018 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.054230 kubelet[2769]: E1030 13:18:50.054213 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.054230 kubelet[2769]: W1030 13:18:50.054223 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.054230 kubelet[2769]: E1030 13:18:50.054232 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.054428 kubelet[2769]: E1030 13:18:50.054409 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.054428 kubelet[2769]: W1030 13:18:50.054422 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.054491 kubelet[2769]: E1030 13:18:50.054433 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.054618 kubelet[2769]: E1030 13:18:50.054601 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.054618 kubelet[2769]: W1030 13:18:50.054612 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.054618 kubelet[2769]: E1030 13:18:50.054620 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.054817 kubelet[2769]: E1030 13:18:50.054801 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.054817 kubelet[2769]: W1030 13:18:50.054812 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.054880 kubelet[2769]: E1030 13:18:50.054821 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:18:50.055185 kubelet[2769]: E1030 13:18:50.055167 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:18:50.055185 kubelet[2769]: W1030 13:18:50.055179 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:18:50.055185 kubelet[2769]: E1030 13:18:50.055187 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:18:50.517585 containerd[1588]: time="2025-10-30T13:18:50.517527325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:50.518420 containerd[1588]: time="2025-10-30T13:18:50.518396086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 30 13:18:50.519483 containerd[1588]: time="2025-10-30T13:18:50.519449524Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:50.521366 containerd[1588]: time="2025-10-30T13:18:50.521338027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:50.521913 containerd[1588]: time="2025-10-30T13:18:50.521864901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.403901184s" Oct 30 13:18:50.521953 containerd[1588]: time="2025-10-30T13:18:50.521912034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 30 13:18:50.525182 containerd[1588]: time="2025-10-30T13:18:50.525152415Z" level=info msg="CreateContainer within sandbox \"81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 13:18:50.533631 containerd[1588]: time="2025-10-30T13:18:50.533595918Z" level=info msg="Container 10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:18:50.540765 containerd[1588]: time="2025-10-30T13:18:50.540725928Z" level=info msg="CreateContainer within sandbox \"81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e\"" Oct 30 13:18:50.541210 containerd[1588]: time="2025-10-30T13:18:50.541182072Z" level=info msg="StartContainer for \"10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e\"" Oct 30 13:18:50.542581 containerd[1588]: time="2025-10-30T13:18:50.542553249Z" level=info msg="connecting to shim 10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e" address="unix:///run/containerd/s/28940e1528cfb68f70ff184cb066258c9a28b2ab2fe2ba0a24f80d1432616343" protocol=ttrpc version=3 Oct 30 13:18:50.574137 systemd[1]: Started cri-containerd-10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e.scope - libcontainer container 10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e. Oct 30 13:18:50.629567 systemd[1]: cri-containerd-10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e.scope: Deactivated successfully. 
Oct 30 13:18:50.633129 containerd[1588]: time="2025-10-30T13:18:50.632951655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e\" id:\"10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e\" pid:3482 exited_at:{seconds:1761830330 nanos:632530360}" Oct 30 13:18:50.643969 containerd[1588]: time="2025-10-30T13:18:50.643912766Z" level=info msg="received exit event container_id:\"10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e\" id:\"10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e\" pid:3482 exited_at:{seconds:1761830330 nanos:632530360}" Oct 30 13:18:50.646223 containerd[1588]: time="2025-10-30T13:18:50.646135331Z" level=info msg="StartContainer for \"10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e\" returns successfully" Oct 30 13:18:50.670935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10405ebc06f99f6eb04d2ab4ae7acf436a15069506a8ac4e7ee851d87152071e-rootfs.mount: Deactivated successfully. 
Oct 30 13:18:50.913346 kubelet[2769]: E1030 13:18:50.913192 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:18:50.986704 kubelet[2769]: I1030 13:18:50.986664 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 13:18:50.987150 kubelet[2769]: E1030 13:18:50.987035 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:50.987235 kubelet[2769]: E1030 13:18:50.987207 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:50.988118 containerd[1588]: time="2025-10-30T13:18:50.988073208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 13:18:52.913346 kubelet[2769]: E1030 13:18:52.913275 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:18:53.845299 containerd[1588]: time="2025-10-30T13:18:53.845238445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:53.846193 containerd[1588]: time="2025-10-30T13:18:53.846159791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 30 13:18:53.847356 containerd[1588]: 
time="2025-10-30T13:18:53.847294418Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:53.849257 containerd[1588]: time="2025-10-30T13:18:53.849217498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:18:53.849834 containerd[1588]: time="2025-10-30T13:18:53.849780718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.861671558s" Oct 30 13:18:53.849834 containerd[1588]: time="2025-10-30T13:18:53.849815927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 30 13:18:53.854624 containerd[1588]: time="2025-10-30T13:18:53.854577012Z" level=info msg="CreateContainer within sandbox \"81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 13:18:53.864952 containerd[1588]: time="2025-10-30T13:18:53.864892295Z" level=info msg="Container 9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:18:53.874246 containerd[1588]: time="2025-10-30T13:18:53.874204520Z" level=info msg="CreateContainer within sandbox \"81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6\"" Oct 30 
13:18:53.874719 containerd[1588]: time="2025-10-30T13:18:53.874691050Z" level=info msg="StartContainer for \"9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6\"" Oct 30 13:18:53.876297 containerd[1588]: time="2025-10-30T13:18:53.876269020Z" level=info msg="connecting to shim 9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6" address="unix:///run/containerd/s/28940e1528cfb68f70ff184cb066258c9a28b2ab2fe2ba0a24f80d1432616343" protocol=ttrpc version=3 Oct 30 13:18:53.901374 systemd[1]: Started cri-containerd-9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6.scope - libcontainer container 9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6. Oct 30 13:18:54.324633 containerd[1588]: time="2025-10-30T13:18:54.324584567Z" level=info msg="StartContainer for \"9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6\" returns successfully" Oct 30 13:18:54.913508 kubelet[2769]: E1030 13:18:54.913445 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:18:55.327450 kubelet[2769]: E1030 13:18:55.327408 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:55.414879 systemd[1]: cri-containerd-9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6.scope: Deactivated successfully. Oct 30 13:18:55.415373 systemd[1]: cri-containerd-9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6.scope: Consumed 672ms CPU time, 178M memory peak, 3.5M read from disk, 171.3M written to disk. 
Oct 30 13:18:55.416287 containerd[1588]: time="2025-10-30T13:18:55.416239344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6\" id:\"9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6\" pid:3541 exited_at:{seconds:1761830335 nanos:415670146}" Oct 30 13:18:55.416685 containerd[1588]: time="2025-10-30T13:18:55.416299101Z" level=info msg="received exit event container_id:\"9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6\" id:\"9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6\" pid:3541 exited_at:{seconds:1761830335 nanos:415670146}" Oct 30 13:18:55.447255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c0b1cb4ff79f0c9cb1bcbcbbbab45f7487847db4693d050c1da964012f689e6-rootfs.mount: Deactivated successfully. Oct 30 13:18:55.494868 kubelet[2769]: I1030 13:18:55.494826 2769 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 13:18:55.563908 systemd[1]: Created slice kubepods-burstable-pod1eb997b9_a4bf_4310_a37f_4b8c7364b569.slice - libcontainer container kubepods-burstable-pod1eb997b9_a4bf_4310_a37f_4b8c7364b569.slice. Oct 30 13:18:55.576852 systemd[1]: Created slice kubepods-besteffort-pod2f3b781a_b409_44c1_bfbe_62b7c2fd7f95.slice - libcontainer container kubepods-besteffort-pod2f3b781a_b409_44c1_bfbe_62b7c2fd7f95.slice. Oct 30 13:18:55.584122 systemd[1]: Created slice kubepods-burstable-podf0340278_e067_4b59_87c3_c2890d479a3c.slice - libcontainer container kubepods-burstable-podf0340278_e067_4b59_87c3_c2890d479a3c.slice. Oct 30 13:18:55.591275 systemd[1]: Created slice kubepods-besteffort-podc30303b7_2f8a_4e76_affb_92ba5d248c6b.slice - libcontainer container kubepods-besteffort-podc30303b7_2f8a_4e76_affb_92ba5d248c6b.slice. 
Oct 30 13:18:55.594363 kubelet[2769]: I1030 13:18:55.594129 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-whisker-backend-key-pair\") pod \"whisker-cf5985cf4-xkfk4\" (UID: \"689c0549-8e19-49c9-a1ed-e0519bd6b7c7\") " pod="calico-system/whisker-cf5985cf4-xkfk4" Oct 30 13:18:55.594363 kubelet[2769]: I1030 13:18:55.594174 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0340278-e067-4b59-87c3-c2890d479a3c-config-volume\") pod \"coredns-674b8bbfcf-l4zjz\" (UID: \"f0340278-e067-4b59-87c3-c2890d479a3c\") " pod="kube-system/coredns-674b8bbfcf-l4zjz" Oct 30 13:18:55.594363 kubelet[2769]: I1030 13:18:55.594212 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8185e627-e431-4f4e-9719-dc6a950cb7cf-calico-apiserver-certs\") pod \"calico-apiserver-ff94d9bcc-4z7tw\" (UID: \"8185e627-e431-4f4e-9719-dc6a950cb7cf\") " pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" Oct 30 13:18:55.594363 kubelet[2769]: I1030 13:18:55.594233 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdf6j\" (UniqueName: \"kubernetes.io/projected/c30303b7-2f8a-4e76-affb-92ba5d248c6b-kube-api-access-rdf6j\") pod \"calico-apiserver-ff94d9bcc-4c6ph\" (UID: \"c30303b7-2f8a-4e76-affb-92ba5d248c6b\") " pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" Oct 30 13:18:55.594363 kubelet[2769]: I1030 13:18:55.594253 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7p4h\" (UniqueName: \"kubernetes.io/projected/8185e627-e431-4f4e-9719-dc6a950cb7cf-kube-api-access-c7p4h\") pod 
\"calico-apiserver-ff94d9bcc-4z7tw\" (UID: \"8185e627-e431-4f4e-9719-dc6a950cb7cf\") " pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" Oct 30 13:18:55.594616 kubelet[2769]: I1030 13:18:55.594268 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpg7v\" (UniqueName: \"kubernetes.io/projected/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-kube-api-access-vpg7v\") pod \"whisker-cf5985cf4-xkfk4\" (UID: \"689c0549-8e19-49c9-a1ed-e0519bd6b7c7\") " pod="calico-system/whisker-cf5985cf4-xkfk4" Oct 30 13:18:55.594616 kubelet[2769]: I1030 13:18:55.594286 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwpsd\" (UniqueName: \"kubernetes.io/projected/f0340278-e067-4b59-87c3-c2890d479a3c-kube-api-access-fwpsd\") pod \"coredns-674b8bbfcf-l4zjz\" (UID: \"f0340278-e067-4b59-87c3-c2890d479a3c\") " pod="kube-system/coredns-674b8bbfcf-l4zjz" Oct 30 13:18:55.594616 kubelet[2769]: I1030 13:18:55.594304 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct6pm\" (UniqueName: \"kubernetes.io/projected/1eb997b9-a4bf-4310-a37f-4b8c7364b569-kube-api-access-ct6pm\") pod \"coredns-674b8bbfcf-wsfwb\" (UID: \"1eb997b9-a4bf-4310-a37f-4b8c7364b569\") " pod="kube-system/coredns-674b8bbfcf-wsfwb" Oct 30 13:18:55.594616 kubelet[2769]: I1030 13:18:55.594322 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmp74\" (UniqueName: \"kubernetes.io/projected/2f3b781a-b409-44c1-bfbe-62b7c2fd7f95-kube-api-access-dmp74\") pod \"calico-kube-controllers-6798f4bdc5-q6qdh\" (UID: \"2f3b781a-b409-44c1-bfbe-62b7c2fd7f95\") " pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" Oct 30 13:18:55.594616 kubelet[2769]: I1030 13:18:55.594339 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-whisker-ca-bundle\") pod \"whisker-cf5985cf4-xkfk4\" (UID: \"689c0549-8e19-49c9-a1ed-e0519bd6b7c7\") " pod="calico-system/whisker-cf5985cf4-xkfk4" Oct 30 13:18:55.594733 kubelet[2769]: I1030 13:18:55.594445 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3b781a-b409-44c1-bfbe-62b7c2fd7f95-tigera-ca-bundle\") pod \"calico-kube-controllers-6798f4bdc5-q6qdh\" (UID: \"2f3b781a-b409-44c1-bfbe-62b7c2fd7f95\") " pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" Oct 30 13:18:55.594733 kubelet[2769]: I1030 13:18:55.594483 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c30303b7-2f8a-4e76-affb-92ba5d248c6b-calico-apiserver-certs\") pod \"calico-apiserver-ff94d9bcc-4c6ph\" (UID: \"c30303b7-2f8a-4e76-affb-92ba5d248c6b\") " pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" Oct 30 13:18:55.594733 kubelet[2769]: I1030 13:18:55.594509 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eb997b9-a4bf-4310-a37f-4b8c7364b569-config-volume\") pod \"coredns-674b8bbfcf-wsfwb\" (UID: \"1eb997b9-a4bf-4310-a37f-4b8c7364b569\") " pod="kube-system/coredns-674b8bbfcf-wsfwb" Oct 30 13:18:55.599373 systemd[1]: Created slice kubepods-besteffort-pod8185e627_e431_4f4e_9719_dc6a950cb7cf.slice - libcontainer container kubepods-besteffort-pod8185e627_e431_4f4e_9719_dc6a950cb7cf.slice. Oct 30 13:18:55.607945 systemd[1]: Created slice kubepods-besteffort-pod689c0549_8e19_49c9_a1ed_e0519bd6b7c7.slice - libcontainer container kubepods-besteffort-pod689c0549_8e19_49c9_a1ed_e0519bd6b7c7.slice. 
Oct 30 13:18:55.614042 systemd[1]: Created slice kubepods-besteffort-podab05aa8b_f302_4973_9a9c_4a341cc5c31e.slice - libcontainer container kubepods-besteffort-podab05aa8b_f302_4973_9a9c_4a341cc5c31e.slice. Oct 30 13:18:55.695103 kubelet[2769]: I1030 13:18:55.695059 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab05aa8b-f302-4973-9a9c-4a341cc5c31e-config\") pod \"goldmane-666569f655-ws88l\" (UID: \"ab05aa8b-f302-4973-9a9c-4a341cc5c31e\") " pod="calico-system/goldmane-666569f655-ws88l" Oct 30 13:18:55.695103 kubelet[2769]: I1030 13:18:55.695108 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ab05aa8b-f302-4973-9a9c-4a341cc5c31e-goldmane-key-pair\") pod \"goldmane-666569f655-ws88l\" (UID: \"ab05aa8b-f302-4973-9a9c-4a341cc5c31e\") " pod="calico-system/goldmane-666569f655-ws88l" Oct 30 13:18:55.695292 kubelet[2769]: I1030 13:18:55.695166 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx8pn\" (UniqueName: \"kubernetes.io/projected/ab05aa8b-f302-4973-9a9c-4a341cc5c31e-kube-api-access-dx8pn\") pod \"goldmane-666569f655-ws88l\" (UID: \"ab05aa8b-f302-4973-9a9c-4a341cc5c31e\") " pod="calico-system/goldmane-666569f655-ws88l" Oct 30 13:18:55.695292 kubelet[2769]: I1030 13:18:55.695286 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab05aa8b-f302-4973-9a9c-4a341cc5c31e-goldmane-ca-bundle\") pod \"goldmane-666569f655-ws88l\" (UID: \"ab05aa8b-f302-4973-9a9c-4a341cc5c31e\") " pod="calico-system/goldmane-666569f655-ws88l" Oct 30 13:18:55.872338 kubelet[2769]: E1030 13:18:55.872214 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:55.872996 containerd[1588]: time="2025-10-30T13:18:55.872926579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wsfwb,Uid:1eb997b9-a4bf-4310-a37f-4b8c7364b569,Namespace:kube-system,Attempt:0,}" Oct 30 13:18:55.883257 containerd[1588]: time="2025-10-30T13:18:55.883200748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6798f4bdc5-q6qdh,Uid:2f3b781a-b409-44c1-bfbe-62b7c2fd7f95,Namespace:calico-system,Attempt:0,}" Oct 30 13:18:55.887859 kubelet[2769]: E1030 13:18:55.887470 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:55.888501 containerd[1588]: time="2025-10-30T13:18:55.888452898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4zjz,Uid:f0340278-e067-4b59-87c3-c2890d479a3c,Namespace:kube-system,Attempt:0,}" Oct 30 13:18:55.898873 containerd[1588]: time="2025-10-30T13:18:55.898822335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff94d9bcc-4c6ph,Uid:c30303b7-2f8a-4e76-affb-92ba5d248c6b,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:18:55.903158 containerd[1588]: time="2025-10-30T13:18:55.902931491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff94d9bcc-4z7tw,Uid:8185e627-e431-4f4e-9719-dc6a950cb7cf,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:18:55.911841 containerd[1588]: time="2025-10-30T13:18:55.911789156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cf5985cf4-xkfk4,Uid:689c0549-8e19-49c9-a1ed-e0519bd6b7c7,Namespace:calico-system,Attempt:0,}" Oct 30 13:18:55.918702 containerd[1588]: time="2025-10-30T13:18:55.918658314Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-ws88l,Uid:ab05aa8b-f302-4973-9a9c-4a341cc5c31e,Namespace:calico-system,Attempt:0,}" Oct 30 13:18:56.054965 containerd[1588]: time="2025-10-30T13:18:56.054892067Z" level=error msg="Failed to destroy network for sandbox \"f93df6f8323474609bd8fbc3185733f20b83e92635d988b946cd51f17dbe5c4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.061745 containerd[1588]: time="2025-10-30T13:18:56.061688523Z" level=error msg="Failed to destroy network for sandbox \"94ea5f2a2381b0d9fd8a772873850fc86b2c81df3f8ee18f3d8e8aa71f8dd420\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.062644 containerd[1588]: time="2025-10-30T13:18:56.062491128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4zjz,Uid:f0340278-e067-4b59-87c3-c2890d479a3c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93df6f8323474609bd8fbc3185733f20b83e92635d988b946cd51f17dbe5c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.062929 kubelet[2769]: E1030 13:18:56.062845 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93df6f8323474609bd8fbc3185733f20b83e92635d988b946cd51f17dbe5c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.063523 containerd[1588]: 
time="2025-10-30T13:18:56.063476732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff94d9bcc-4z7tw,Uid:8185e627-e431-4f4e-9719-dc6a950cb7cf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ea5f2a2381b0d9fd8a772873850fc86b2c81df3f8ee18f3d8e8aa71f8dd420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.063796 kubelet[2769]: E1030 13:18:56.063744 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ea5f2a2381b0d9fd8a772873850fc86b2c81df3f8ee18f3d8e8aa71f8dd420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.063833 kubelet[2769]: E1030 13:18:56.063723 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93df6f8323474609bd8fbc3185733f20b83e92635d988b946cd51f17dbe5c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l4zjz" Oct 30 13:18:56.063833 kubelet[2769]: E1030 13:18:56.063823 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ea5f2a2381b0d9fd8a772873850fc86b2c81df3f8ee18f3d8e8aa71f8dd420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" Oct 30 
13:18:56.063884 kubelet[2769]: E1030 13:18:56.063828 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93df6f8323474609bd8fbc3185733f20b83e92635d988b946cd51f17dbe5c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l4zjz" Oct 30 13:18:56.063884 kubelet[2769]: E1030 13:18:56.063848 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ea5f2a2381b0d9fd8a772873850fc86b2c81df3f8ee18f3d8e8aa71f8dd420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" Oct 30 13:18:56.063955 kubelet[2769]: E1030 13:18:56.063910 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff94d9bcc-4z7tw_calico-apiserver(8185e627-e431-4f4e-9719-dc6a950cb7cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff94d9bcc-4z7tw_calico-apiserver(8185e627-e431-4f4e-9719-dc6a950cb7cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94ea5f2a2381b0d9fd8a772873850fc86b2c81df3f8ee18f3d8e8aa71f8dd420\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" podUID="8185e627-e431-4f4e-9719-dc6a950cb7cf" Oct 30 13:18:56.065068 kubelet[2769]: E1030 13:18:56.063924 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-l4zjz_kube-system(f0340278-e067-4b59-87c3-c2890d479a3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-l4zjz_kube-system(f0340278-e067-4b59-87c3-c2890d479a3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f93df6f8323474609bd8fbc3185733f20b83e92635d988b946cd51f17dbe5c4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-l4zjz" podUID="f0340278-e067-4b59-87c3-c2890d479a3c" Oct 30 13:18:56.071032 containerd[1588]: time="2025-10-30T13:18:56.070958364Z" level=error msg="Failed to destroy network for sandbox \"ce16cf6f446d15c79bff6af93314690b5b804d0ab78a1c7f5630886fc99444e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.072955 containerd[1588]: time="2025-10-30T13:18:56.072902279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wsfwb,Uid:1eb997b9-a4bf-4310-a37f-4b8c7364b569,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce16cf6f446d15c79bff6af93314690b5b804d0ab78a1c7f5630886fc99444e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.074615 kubelet[2769]: E1030 13:18:56.073177 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce16cf6f446d15c79bff6af93314690b5b804d0ab78a1c7f5630886fc99444e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.074615 kubelet[2769]: E1030 13:18:56.073252 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce16cf6f446d15c79bff6af93314690b5b804d0ab78a1c7f5630886fc99444e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wsfwb" Oct 30 13:18:56.074615 kubelet[2769]: E1030 13:18:56.073276 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce16cf6f446d15c79bff6af93314690b5b804d0ab78a1c7f5630886fc99444e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wsfwb" Oct 30 13:18:56.074712 kubelet[2769]: E1030 13:18:56.073349 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wsfwb_kube-system(1eb997b9-a4bf-4310-a37f-4b8c7364b569)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wsfwb_kube-system(1eb997b9-a4bf-4310-a37f-4b8c7364b569)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce16cf6f446d15c79bff6af93314690b5b804d0ab78a1c7f5630886fc99444e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wsfwb" podUID="1eb997b9-a4bf-4310-a37f-4b8c7364b569" Oct 30 13:18:56.081369 containerd[1588]: time="2025-10-30T13:18:56.081318043Z" level=error msg="Failed to destroy network for sandbox 
\"7b9859898abb082ae88c192908ead7f0d9a46700f6eab121ae7852128c5fe012\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.083820 containerd[1588]: time="2025-10-30T13:18:56.082772548Z" level=error msg="Failed to destroy network for sandbox \"37ee9af723d610ceb0c95303724161da8e646aa6da7d4fc0b7882feeb3201de7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.083820 containerd[1588]: time="2025-10-30T13:18:56.083542609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff94d9bcc-4c6ph,Uid:c30303b7-2f8a-4e76-affb-92ba5d248c6b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b9859898abb082ae88c192908ead7f0d9a46700f6eab121ae7852128c5fe012\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.083930 kubelet[2769]: E1030 13:18:56.083887 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b9859898abb082ae88c192908ead7f0d9a46700f6eab121ae7852128c5fe012\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.084023 kubelet[2769]: E1030 13:18:56.083944 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b9859898abb082ae88c192908ead7f0d9a46700f6eab121ae7852128c5fe012\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" Oct 30 13:18:56.084023 kubelet[2769]: E1030 13:18:56.083963 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b9859898abb082ae88c192908ead7f0d9a46700f6eab121ae7852128c5fe012\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" Oct 30 13:18:56.084114 kubelet[2769]: E1030 13:18:56.084079 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff94d9bcc-4c6ph_calico-apiserver(c30303b7-2f8a-4e76-affb-92ba5d248c6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff94d9bcc-4c6ph_calico-apiserver(c30303b7-2f8a-4e76-affb-92ba5d248c6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b9859898abb082ae88c192908ead7f0d9a46700f6eab121ae7852128c5fe012\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" podUID="c30303b7-2f8a-4e76-affb-92ba5d248c6b" Oct 30 13:18:56.084779 containerd[1588]: time="2025-10-30T13:18:56.084743396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6798f4bdc5-q6qdh,Uid:2f3b781a-b409-44c1-bfbe-62b7c2fd7f95,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ee9af723d610ceb0c95303724161da8e646aa6da7d4fc0b7882feeb3201de7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.085073 kubelet[2769]: E1030 13:18:56.085036 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ee9af723d610ceb0c95303724161da8e646aa6da7d4fc0b7882feeb3201de7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.085195 kubelet[2769]: E1030 13:18:56.085176 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ee9af723d610ceb0c95303724161da8e646aa6da7d4fc0b7882feeb3201de7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" Oct 30 13:18:56.085268 kubelet[2769]: E1030 13:18:56.085251 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ee9af723d610ceb0c95303724161da8e646aa6da7d4fc0b7882feeb3201de7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" Oct 30 13:18:56.085386 kubelet[2769]: E1030 13:18:56.085357 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6798f4bdc5-q6qdh_calico-system(2f3b781a-b409-44c1-bfbe-62b7c2fd7f95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6798f4bdc5-q6qdh_calico-system(2f3b781a-b409-44c1-bfbe-62b7c2fd7f95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37ee9af723d610ceb0c95303724161da8e646aa6da7d4fc0b7882feeb3201de7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" podUID="2f3b781a-b409-44c1-bfbe-62b7c2fd7f95" Oct 30 13:18:56.092501 containerd[1588]: time="2025-10-30T13:18:56.092445239Z" level=error msg="Failed to destroy network for sandbox \"a51a552f6566463806bb931497be83b5bb74b3b89c9409b2472d272e2db7cb4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.093867 containerd[1588]: time="2025-10-30T13:18:56.093797833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ws88l,Uid:ab05aa8b-f302-4973-9a9c-4a341cc5c31e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51a552f6566463806bb931497be83b5bb74b3b89c9409b2472d272e2db7cb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.094082 kubelet[2769]: E1030 13:18:56.094049 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51a552f6566463806bb931497be83b5bb74b3b89c9409b2472d272e2db7cb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.094133 kubelet[2769]: E1030 
13:18:56.094112 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51a552f6566463806bb931497be83b5bb74b3b89c9409b2472d272e2db7cb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ws88l" Oct 30 13:18:56.094160 kubelet[2769]: E1030 13:18:56.094133 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51a552f6566463806bb931497be83b5bb74b3b89c9409b2472d272e2db7cb4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ws88l" Oct 30 13:18:56.094254 kubelet[2769]: E1030 13:18:56.094199 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ws88l_calico-system(ab05aa8b-f302-4973-9a9c-4a341cc5c31e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ws88l_calico-system(ab05aa8b-f302-4973-9a9c-4a341cc5c31e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a51a552f6566463806bb931497be83b5bb74b3b89c9409b2472d272e2db7cb4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ws88l" podUID="ab05aa8b-f302-4973-9a9c-4a341cc5c31e" Oct 30 13:18:56.102902 containerd[1588]: time="2025-10-30T13:18:56.102849386Z" level=error msg="Failed to destroy network for sandbox \"ee17b6acdd5e8dc4cd3b451c5b0ff228639be4c48aba919862c0885b6363fae1\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.104085 containerd[1588]: time="2025-10-30T13:18:56.104038869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cf5985cf4-xkfk4,Uid:689c0549-8e19-49c9-a1ed-e0519bd6b7c7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee17b6acdd5e8dc4cd3b451c5b0ff228639be4c48aba919862c0885b6363fae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.104283 kubelet[2769]: E1030 13:18:56.104243 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee17b6acdd5e8dc4cd3b451c5b0ff228639be4c48aba919862c0885b6363fae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.104328 kubelet[2769]: E1030 13:18:56.104294 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee17b6acdd5e8dc4cd3b451c5b0ff228639be4c48aba919862c0885b6363fae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cf5985cf4-xkfk4" Oct 30 13:18:56.104328 kubelet[2769]: E1030 13:18:56.104317 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee17b6acdd5e8dc4cd3b451c5b0ff228639be4c48aba919862c0885b6363fae1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cf5985cf4-xkfk4" Oct 30 13:18:56.104391 kubelet[2769]: E1030 13:18:56.104365 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cf5985cf4-xkfk4_calico-system(689c0549-8e19-49c9-a1ed-e0519bd6b7c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cf5985cf4-xkfk4_calico-system(689c0549-8e19-49c9-a1ed-e0519bd6b7c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee17b6acdd5e8dc4cd3b451c5b0ff228639be4c48aba919862c0885b6363fae1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cf5985cf4-xkfk4" podUID="689c0549-8e19-49c9-a1ed-e0519bd6b7c7" Oct 30 13:18:56.333070 kubelet[2769]: E1030 13:18:56.333022 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:18:56.333824 containerd[1588]: time="2025-10-30T13:18:56.333781954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 13:18:56.919533 systemd[1]: Created slice kubepods-besteffort-podf14ca4d9_aac0_4af6_a374_d183e93fb183.slice - libcontainer container kubepods-besteffort-podf14ca4d9_aac0_4af6_a374_d183e93fb183.slice. 
Oct 30 13:18:56.922692 containerd[1588]: time="2025-10-30T13:18:56.922630064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jqhml,Uid:f14ca4d9-aac0-4af6-a374-d183e93fb183,Namespace:calico-system,Attempt:0,}" Oct 30 13:18:56.987838 containerd[1588]: time="2025-10-30T13:18:56.987757206Z" level=error msg="Failed to destroy network for sandbox \"a52a07f3d1339fff49a5f72fe97cf98308475a63359e957c809eec7b2d6fb919\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.989507 containerd[1588]: time="2025-10-30T13:18:56.989440961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jqhml,Uid:f14ca4d9-aac0-4af6-a374-d183e93fb183,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52a07f3d1339fff49a5f72fe97cf98308475a63359e957c809eec7b2d6fb919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.989882 kubelet[2769]: E1030 13:18:56.989782 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52a07f3d1339fff49a5f72fe97cf98308475a63359e957c809eec7b2d6fb919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:18:56.989882 kubelet[2769]: E1030 13:18:56.989878 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52a07f3d1339fff49a5f72fe97cf98308475a63359e957c809eec7b2d6fb919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jqhml" Oct 30 13:18:56.990104 kubelet[2769]: E1030 13:18:56.989906 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52a07f3d1339fff49a5f72fe97cf98308475a63359e957c809eec7b2d6fb919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jqhml" Oct 30 13:18:56.990134 systemd[1]: run-netns-cni\x2d61862e21\x2db22a\x2d5b7b\x2d60b3\x2d558414cbaf55.mount: Deactivated successfully. Oct 30 13:18:56.990466 kubelet[2769]: E1030 13:18:56.990222 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jqhml_calico-system(f14ca4d9-aac0-4af6-a374-d183e93fb183)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jqhml_calico-system(f14ca4d9-aac0-4af6-a374-d183e93fb183)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a52a07f3d1339fff49a5f72fe97cf98308475a63359e957c809eec7b2d6fb919\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:19:04.026795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806065043.mount: Deactivated successfully. 
Oct 30 13:19:05.701921 containerd[1588]: time="2025-10-30T13:19:05.701843817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:19:05.704481 containerd[1588]: time="2025-10-30T13:19:05.704415567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 13:19:05.708373 containerd[1588]: time="2025-10-30T13:19:05.707823714Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:19:05.712811 containerd[1588]: time="2025-10-30T13:19:05.712232826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:19:05.712811 containerd[1588]: time="2025-10-30T13:19:05.712397997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.378567608s" Oct 30 13:19:05.712811 containerd[1588]: time="2025-10-30T13:19:05.712443676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 13:19:05.738775 systemd[1]: Started sshd@7-10.0.0.37:22-10.0.0.1:51770.service - OpenSSH per-connection server daemon (10.0.0.1:51770). 
Oct 30 13:19:05.743217 containerd[1588]: time="2025-10-30T13:19:05.743169376Z" level=info msg="CreateContainer within sandbox \"81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 13:19:05.754514 containerd[1588]: time="2025-10-30T13:19:05.754464416Z" level=info msg="Container 3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:19:05.764855 containerd[1588]: time="2025-10-30T13:19:05.764806053Z" level=info msg="CreateContainer within sandbox \"81a26c6af2c9204ccd9754c9086cb2ddee1788df3a3b89c1f83ec3e11ff595b2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758\"" Oct 30 13:19:05.768154 containerd[1588]: time="2025-10-30T13:19:05.768113464Z" level=info msg="StartContainer for \"3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758\"" Oct 30 13:19:05.771297 containerd[1588]: time="2025-10-30T13:19:05.769959162Z" level=info msg="connecting to shim 3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758" address="unix:///run/containerd/s/28940e1528cfb68f70ff184cb066258c9a28b2ab2fe2ba0a24f80d1432616343" protocol=ttrpc version=3 Oct 30 13:19:05.821318 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 51770 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:05.823612 sshd-session[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:05.830039 systemd-logind[1576]: New session 8 of user core. Oct 30 13:19:05.838145 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 30 13:19:05.842594 systemd[1]: Started cri-containerd-3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758.scope - libcontainer container 3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758. 
Oct 30 13:19:05.934702 containerd[1588]: time="2025-10-30T13:19:05.933245196Z" level=info msg="StartContainer for \"3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758\" returns successfully" Oct 30 13:19:05.960035 sshd[3869]: Connection closed by 10.0.0.1 port 51770 Oct 30 13:19:05.960851 sshd-session[3855]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:05.966908 systemd[1]: sshd@7-10.0.0.37:22-10.0.0.1:51770.service: Deactivated successfully. Oct 30 13:19:05.970189 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 13:19:05.973455 systemd-logind[1576]: Session 8 logged out. Waiting for processes to exit. Oct 30 13:19:05.974593 systemd-logind[1576]: Removed session 8. Oct 30 13:19:06.012824 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 13:19:06.014452 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 30 13:19:06.255454 kubelet[2769]: I1030 13:19:06.255139 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-whisker-ca-bundle\") pod \"689c0549-8e19-49c9-a1ed-e0519bd6b7c7\" (UID: \"689c0549-8e19-49c9-a1ed-e0519bd6b7c7\") " Oct 30 13:19:06.255938 kubelet[2769]: I1030 13:19:06.255904 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "689c0549-8e19-49c9-a1ed-e0519bd6b7c7" (UID: "689c0549-8e19-49c9-a1ed-e0519bd6b7c7"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 13:19:06.256069 kubelet[2769]: I1030 13:19:06.256048 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-whisker-backend-key-pair\") pod \"689c0549-8e19-49c9-a1ed-e0519bd6b7c7\" (UID: \"689c0549-8e19-49c9-a1ed-e0519bd6b7c7\") " Oct 30 13:19:06.256114 kubelet[2769]: I1030 13:19:06.256075 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpg7v\" (UniqueName: \"kubernetes.io/projected/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-kube-api-access-vpg7v\") pod \"689c0549-8e19-49c9-a1ed-e0519bd6b7c7\" (UID: \"689c0549-8e19-49c9-a1ed-e0519bd6b7c7\") " Oct 30 13:19:06.256149 kubelet[2769]: I1030 13:19:06.256129 2769 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 30 13:19:06.262192 kubelet[2769]: I1030 13:19:06.262138 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-kube-api-access-vpg7v" (OuterVolumeSpecName: "kube-api-access-vpg7v") pod "689c0549-8e19-49c9-a1ed-e0519bd6b7c7" (UID: "689c0549-8e19-49c9-a1ed-e0519bd6b7c7"). InnerVolumeSpecName "kube-api-access-vpg7v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 13:19:06.262553 systemd[1]: var-lib-kubelet-pods-689c0549\x2d8e19\x2d49c9\x2da1ed\x2de0519bd6b7c7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 13:19:06.262686 systemd[1]: var-lib-kubelet-pods-689c0549\x2d8e19\x2d49c9\x2da1ed\x2de0519bd6b7c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvpg7v.mount: Deactivated successfully. 
Oct 30 13:19:06.263162 kubelet[2769]: I1030 13:19:06.263118 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "689c0549-8e19-49c9-a1ed-e0519bd6b7c7" (UID: "689c0549-8e19-49c9-a1ed-e0519bd6b7c7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 13:19:06.332618 kubelet[2769]: I1030 13:19:06.332575 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 13:19:06.334902 kubelet[2769]: E1030 13:19:06.334854 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:06.356388 kubelet[2769]: I1030 13:19:06.356285 2769 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vpg7v\" (UniqueName: \"kubernetes.io/projected/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-kube-api-access-vpg7v\") on node \"localhost\" DevicePath \"\"" Oct 30 13:19:06.356388 kubelet[2769]: I1030 13:19:06.356308 2769 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/689c0549-8e19-49c9-a1ed-e0519bd6b7c7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 30 13:19:06.359805 kubelet[2769]: E1030 13:19:06.359777 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:06.360131 kubelet[2769]: E1030 13:19:06.360115 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:06.366266 systemd[1]: Removed slice 
kubepods-besteffort-pod689c0549_8e19_49c9_a1ed_e0519bd6b7c7.slice - libcontainer container kubepods-besteffort-pod689c0549_8e19_49c9_a1ed_e0519bd6b7c7.slice. Oct 30 13:19:06.472528 containerd[1588]: time="2025-10-30T13:19:06.472472289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758\" id:\"c1ac19f6243204b882bf2318a5387c5cd7e88bfd08b74726eb966077cddc6685\" pid:3942 exit_status:1 exited_at:{seconds:1761830346 nanos:472050039}" Oct 30 13:19:06.744934 kubelet[2769]: I1030 13:19:06.744863 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7jhjv" podStartSLOduration=2.035172518 podStartE2EDuration="20.744846411s" podCreationTimestamp="2025-10-30 13:18:46 +0000 UTC" firstStartedPulling="2025-10-30 13:18:47.005914668 +0000 UTC m=+21.190864736" lastFinishedPulling="2025-10-30 13:19:05.71558855 +0000 UTC m=+39.900538629" observedRunningTime="2025-10-30 13:19:06.744652493 +0000 UTC m=+40.929602572" watchObservedRunningTime="2025-10-30 13:19:06.744846411 +0000 UTC m=+40.929796489" Oct 30 13:19:06.770821 systemd[1]: Created slice kubepods-besteffort-pod3b9db068_4663_4510_b3a1_4b6c4be0ac8f.slice - libcontainer container kubepods-besteffort-pod3b9db068_4663_4510_b3a1_4b6c4be0ac8f.slice. 
Oct 30 13:19:06.862412 kubelet[2769]: I1030 13:19:06.862350 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b9db068-4663-4510-b3a1-4b6c4be0ac8f-whisker-ca-bundle\") pod \"whisker-59b5978bfb-8lpn6\" (UID: \"3b9db068-4663-4510-b3a1-4b6c4be0ac8f\") " pod="calico-system/whisker-59b5978bfb-8lpn6" Oct 30 13:19:06.862412 kubelet[2769]: I1030 13:19:06.862400 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b9db068-4663-4510-b3a1-4b6c4be0ac8f-whisker-backend-key-pair\") pod \"whisker-59b5978bfb-8lpn6\" (UID: \"3b9db068-4663-4510-b3a1-4b6c4be0ac8f\") " pod="calico-system/whisker-59b5978bfb-8lpn6" Oct 30 13:19:06.862412 kubelet[2769]: I1030 13:19:06.862415 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcptg\" (UniqueName: \"kubernetes.io/projected/3b9db068-4663-4510-b3a1-4b6c4be0ac8f-kube-api-access-hcptg\") pod \"whisker-59b5978bfb-8lpn6\" (UID: \"3b9db068-4663-4510-b3a1-4b6c4be0ac8f\") " pod="calico-system/whisker-59b5978bfb-8lpn6" Oct 30 13:19:06.913745 containerd[1588]: time="2025-10-30T13:19:06.913675890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff94d9bcc-4c6ph,Uid:c30303b7-2f8a-4e76-affb-92ba5d248c6b,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:19:07.076558 containerd[1588]: time="2025-10-30T13:19:07.075165971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59b5978bfb-8lpn6,Uid:3b9db068-4663-4510-b3a1-4b6c4be0ac8f,Namespace:calico-system,Attempt:0,}" Oct 30 13:19:07.075194 systemd-networkd[1508]: cali3fa1c692952: Link UP Oct 30 13:19:07.077636 systemd-networkd[1508]: cali3fa1c692952: Gained carrier Oct 30 13:19:07.171742 containerd[1588]: 2025-10-30 13:19:06.938 [INFO][3969] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Oct 30 13:19:07.171742 containerd[1588]: 2025-10-30 13:19:06.956 [INFO][3969] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0 calico-apiserver-ff94d9bcc- calico-apiserver c30303b7-2f8a-4e76-affb-92ba5d248c6b 845 0 2025-10-30 13:18:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ff94d9bcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ff94d9bcc-4c6ph eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3fa1c692952 [] [] }} ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4c6ph" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-" Oct 30 13:19:07.171742 containerd[1588]: 2025-10-30 13:19:06.956 [INFO][3969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4c6ph" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" Oct 30 13:19:07.171742 containerd[1588]: 2025-10-30 13:19:07.024 [INFO][3984] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" HandleID="k8s-pod-network.652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Workload="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" Oct 30 13:19:07.172050 containerd[1588]: 2025-10-30 13:19:07.025 [INFO][3984] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" 
HandleID="k8s-pod-network.652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Workload="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ff94d9bcc-4c6ph", "timestamp":"2025-10-30 13:19:07.024518229 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:19:07.172050 containerd[1588]: 2025-10-30 13:19:07.025 [INFO][3984] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:19:07.172050 containerd[1588]: 2025-10-30 13:19:07.025 [INFO][3984] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:19:07.172050 containerd[1588]: 2025-10-30 13:19:07.025 [INFO][3984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:19:07.172050 containerd[1588]: 2025-10-30 13:19:07.033 [INFO][3984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" host="localhost" Oct 30 13:19:07.172050 containerd[1588]: 2025-10-30 13:19:07.043 [INFO][3984] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:19:07.172050 containerd[1588]: 2025-10-30 13:19:07.047 [INFO][3984] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:19:07.172050 containerd[1588]: 2025-10-30 13:19:07.049 [INFO][3984] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:07.172050 containerd[1588]: 2025-10-30 13:19:07.050 [INFO][3984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:07.172050 containerd[1588]: 
2025-10-30 13:19:07.050 [INFO][3984] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" host="localhost" Oct 30 13:19:07.172327 containerd[1588]: 2025-10-30 13:19:07.052 [INFO][3984] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161 Oct 30 13:19:07.172327 containerd[1588]: 2025-10-30 13:19:07.056 [INFO][3984] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" host="localhost" Oct 30 13:19:07.172327 containerd[1588]: 2025-10-30 13:19:07.060 [INFO][3984] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" host="localhost" Oct 30 13:19:07.172327 containerd[1588]: 2025-10-30 13:19:07.060 [INFO][3984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" host="localhost" Oct 30 13:19:07.172327 containerd[1588]: 2025-10-30 13:19:07.060 [INFO][3984] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:19:07.172327 containerd[1588]: 2025-10-30 13:19:07.060 [INFO][3984] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" HandleID="k8s-pod-network.652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Workload="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" Oct 30 13:19:07.172467 containerd[1588]: 2025-10-30 13:19:07.066 [INFO][3969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4c6ph" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0", GenerateName:"calico-apiserver-ff94d9bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"c30303b7-2f8a-4e76-affb-92ba5d248c6b", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff94d9bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ff94d9bcc-4c6ph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fa1c692952", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:07.172542 containerd[1588]: 2025-10-30 13:19:07.066 [INFO][3969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4c6ph" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" Oct 30 13:19:07.172542 containerd[1588]: 2025-10-30 13:19:07.066 [INFO][3969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fa1c692952 ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4c6ph" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" Oct 30 13:19:07.172542 containerd[1588]: 2025-10-30 13:19:07.075 [INFO][3969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4c6ph" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" Oct 30 13:19:07.172605 containerd[1588]: 2025-10-30 13:19:07.075 [INFO][3969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4c6ph" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0", GenerateName:"calico-apiserver-ff94d9bcc-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"c30303b7-2f8a-4e76-affb-92ba5d248c6b", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff94d9bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161", Pod:"calico-apiserver-ff94d9bcc-4c6ph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fa1c692952", MAC:"e2:fb:0d:cb:1b:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:07.172657 containerd[1588]: 2025-10-30 13:19:07.168 [INFO][3969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4c6ph" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4c6ph-eth0" Oct 30 13:19:07.353512 containerd[1588]: time="2025-10-30T13:19:07.353340559Z" level=info msg="connecting to shim 652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161" address="unix:///run/containerd/s/09796e25f9c0ddbbf2c8d8ee42028ad9acfdd2f3334b80f99152e5256c6f32e5" namespace=k8s.io protocol=ttrpc 
version=3 Oct 30 13:19:07.363402 kubelet[2769]: E1030 13:19:07.363336 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:07.393249 systemd[1]: Started cri-containerd-652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161.scope - libcontainer container 652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161. Oct 30 13:19:07.410194 systemd-networkd[1508]: cali5ecce26ad2d: Link UP Oct 30 13:19:07.410945 systemd-networkd[1508]: cali5ecce26ad2d: Gained carrier Oct 30 13:19:07.428618 containerd[1588]: 2025-10-30 13:19:07.321 [INFO][4001] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:19:07.428618 containerd[1588]: 2025-10-30 13:19:07.333 [INFO][4001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59b5978bfb--8lpn6-eth0 whisker-59b5978bfb- calico-system 3b9db068-4663-4510-b3a1-4b6c4be0ac8f 965 0 2025-10-30 13:19:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59b5978bfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59b5978bfb-8lpn6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5ecce26ad2d [] [] }} ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Namespace="calico-system" Pod="whisker-59b5978bfb-8lpn6" WorkloadEndpoint="localhost-k8s-whisker--59b5978bfb--8lpn6-" Oct 30 13:19:07.428618 containerd[1588]: 2025-10-30 13:19:07.333 [INFO][4001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Namespace="calico-system" Pod="whisker-59b5978bfb-8lpn6" WorkloadEndpoint="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" Oct 30 13:19:07.428618 
containerd[1588]: 2025-10-30 13:19:07.358 [INFO][4019] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" HandleID="k8s-pod-network.4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Workload="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.358 [INFO][4019] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" HandleID="k8s-pod-network.4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Workload="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59b5978bfb-8lpn6", "timestamp":"2025-10-30 13:19:07.358489766 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.358 [INFO][4019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.358 [INFO][4019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.358 [INFO][4019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.370 [INFO][4019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" host="localhost" Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.376 [INFO][4019] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.380 [INFO][4019] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.384 [INFO][4019] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.389 [INFO][4019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:07.428907 containerd[1588]: 2025-10-30 13:19:07.389 [INFO][4019] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" host="localhost" Oct 30 13:19:07.429176 containerd[1588]: 2025-10-30 13:19:07.392 [INFO][4019] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea Oct 30 13:19:07.429176 containerd[1588]: 2025-10-30 13:19:07.395 [INFO][4019] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" host="localhost" Oct 30 13:19:07.429176 containerd[1588]: 2025-10-30 13:19:07.404 [INFO][4019] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" host="localhost" Oct 30 13:19:07.429176 containerd[1588]: 2025-10-30 13:19:07.404 [INFO][4019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" host="localhost" Oct 30 13:19:07.429176 containerd[1588]: 2025-10-30 13:19:07.404 [INFO][4019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:19:07.429176 containerd[1588]: 2025-10-30 13:19:07.404 [INFO][4019] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" HandleID="k8s-pod-network.4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Workload="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" Oct 30 13:19:07.429305 containerd[1588]: 2025-10-30 13:19:07.407 [INFO][4001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Namespace="calico-system" Pod="whisker-59b5978bfb-8lpn6" WorkloadEndpoint="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59b5978bfb--8lpn6-eth0", GenerateName:"whisker-59b5978bfb-", Namespace:"calico-system", SelfLink:"", UID:"3b9db068-4663-4510-b3a1-4b6c4be0ac8f", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 19, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59b5978bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59b5978bfb-8lpn6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5ecce26ad2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:07.429305 containerd[1588]: 2025-10-30 13:19:07.407 [INFO][4001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Namespace="calico-system" Pod="whisker-59b5978bfb-8lpn6" WorkloadEndpoint="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" Oct 30 13:19:07.429397 containerd[1588]: 2025-10-30 13:19:07.407 [INFO][4001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ecce26ad2d ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Namespace="calico-system" Pod="whisker-59b5978bfb-8lpn6" WorkloadEndpoint="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" Oct 30 13:19:07.429397 containerd[1588]: 2025-10-30 13:19:07.410 [INFO][4001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Namespace="calico-system" Pod="whisker-59b5978bfb-8lpn6" WorkloadEndpoint="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" Oct 30 13:19:07.429440 containerd[1588]: 2025-10-30 13:19:07.411 [INFO][4001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Namespace="calico-system" Pod="whisker-59b5978bfb-8lpn6" 
WorkloadEndpoint="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59b5978bfb--8lpn6-eth0", GenerateName:"whisker-59b5978bfb-", Namespace:"calico-system", SelfLink:"", UID:"3b9db068-4663-4510-b3a1-4b6c4be0ac8f", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 19, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59b5978bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea", Pod:"whisker-59b5978bfb-8lpn6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5ecce26ad2d", MAC:"52:17:bd:b9:e7:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:07.429496 containerd[1588]: 2025-10-30 13:19:07.425 [INFO][4001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" Namespace="calico-system" Pod="whisker-59b5978bfb-8lpn6" WorkloadEndpoint="localhost-k8s-whisker--59b5978bfb--8lpn6-eth0" Oct 30 13:19:07.437935 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such 
device or address Oct 30 13:19:07.457181 containerd[1588]: time="2025-10-30T13:19:07.457096591Z" level=info msg="connecting to shim 4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea" address="unix:///run/containerd/s/c9bc590175429674df2ea99d5250edfaacdb2ca3a965701061266288057d0cc4" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:19:07.477518 containerd[1588]: time="2025-10-30T13:19:07.477472100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758\" id:\"8dfb249c363952536d1eebb79e904600ae18bc38bc67a3bd436b06185d51fb99\" pid:4068 exit_status:1 exited_at:{seconds:1761830347 nanos:477173241}" Oct 30 13:19:07.483289 containerd[1588]: time="2025-10-30T13:19:07.483246391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff94d9bcc-4c6ph,Uid:c30303b7-2f8a-4e76-affb-92ba5d248c6b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"652eec77452bab3db1411f84b5356391b32e2937172c5fe5747e734ae7da9161\"" Oct 30 13:19:07.486234 containerd[1588]: time="2025-10-30T13:19:07.486191649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:19:07.491133 systemd[1]: Started cri-containerd-4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea.scope - libcontainer container 4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea. 
Oct 30 13:19:07.508535 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:19:07.540864 containerd[1588]: time="2025-10-30T13:19:07.540809971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59b5978bfb-8lpn6,Uid:3b9db068-4663-4510-b3a1-4b6c4be0ac8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c44a8365e30d1eca2fe2a4c3404fe528bb6fac0956784b7a783a8e930e28cea\"" Oct 30 13:19:07.875032 containerd[1588]: time="2025-10-30T13:19:07.874940918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:07.879952 containerd[1588]: time="2025-10-30T13:19:07.879893793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:19:07.889043 containerd[1588]: time="2025-10-30T13:19:07.888974704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:19:07.894859 kubelet[2769]: E1030 13:19:07.894802 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:07.894991 kubelet[2769]: E1030 13:19:07.894876 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:07.896834 kubelet[2769]: E1030 13:19:07.896771 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdf6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff94d9bcc-4c6ph_calico-apiserver(c30303b7-2f8a-4e76-affb-92ba5d248c6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:07.898264 containerd[1588]: time="2025-10-30T13:19:07.898237628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 13:19:07.898519 kubelet[2769]: E1030 13:19:07.898418 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" podUID="c30303b7-2f8a-4e76-affb-92ba5d248c6b" Oct 30 13:19:07.914554 kubelet[2769]: E1030 13:19:07.914502 2769 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:07.922351 containerd[1588]: time="2025-10-30T13:19:07.920218334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ws88l,Uid:ab05aa8b-f302-4973-9a9c-4a341cc5c31e,Namespace:calico-system,Attempt:0,}" Oct 30 13:19:07.922351 containerd[1588]: time="2025-10-30T13:19:07.920224285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wsfwb,Uid:1eb997b9-a4bf-4310-a37f-4b8c7364b569,Namespace:kube-system,Attempt:0,}" Oct 30 13:19:07.925601 kubelet[2769]: I1030 13:19:07.925561 2769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="689c0549-8e19-49c9-a1ed-e0519bd6b7c7" path="/var/lib/kubelet/pods/689c0549-8e19-49c9-a1ed-e0519bd6b7c7/volumes" Oct 30 13:19:08.067147 systemd-networkd[1508]: caliec514825c6c: Link UP Oct 30 13:19:08.067364 systemd-networkd[1508]: caliec514825c6c: Gained carrier Oct 30 13:19:08.083420 containerd[1588]: 2025-10-30 13:19:07.967 [INFO][4274] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:19:08.083420 containerd[1588]: 2025-10-30 13:19:07.981 [INFO][4274] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0 coredns-674b8bbfcf- kube-system 1eb997b9-a4bf-4310-a37f-4b8c7364b569 840 0 2025-10-30 13:18:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wsfwb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliec514825c6c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-wsfwb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wsfwb-" Oct 30 13:19:08.083420 containerd[1588]: 2025-10-30 13:19:07.981 [INFO][4274] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfwb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" Oct 30 13:19:08.083420 containerd[1588]: 2025-10-30 13:19:08.023 [INFO][4295] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" HandleID="k8s-pod-network.2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Workload="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.024 [INFO][4295] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" HandleID="k8s-pod-network.2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Workload="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a54c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wsfwb", "timestamp":"2025-10-30 13:19:08.023860306 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.024 [INFO][4295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.024 [INFO][4295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.024 [INFO][4295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.031 [INFO][4295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" host="localhost" Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.035 [INFO][4295] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.041 [INFO][4295] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.043 [INFO][4295] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.046 [INFO][4295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:08.083643 containerd[1588]: 2025-10-30 13:19:08.046 [INFO][4295] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" host="localhost" Oct 30 13:19:08.083874 containerd[1588]: 2025-10-30 13:19:08.048 [INFO][4295] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4 Oct 30 13:19:08.083874 containerd[1588]: 2025-10-30 13:19:08.052 [INFO][4295] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" host="localhost" Oct 30 13:19:08.083874 containerd[1588]: 2025-10-30 13:19:08.058 [INFO][4295] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" host="localhost" Oct 30 13:19:08.083874 containerd[1588]: 2025-10-30 13:19:08.058 [INFO][4295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" host="localhost" Oct 30 13:19:08.083874 containerd[1588]: 2025-10-30 13:19:08.058 [INFO][4295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:19:08.083874 containerd[1588]: 2025-10-30 13:19:08.058 [INFO][4295] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" HandleID="k8s-pod-network.2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Workload="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" Oct 30 13:19:08.084022 containerd[1588]: 2025-10-30 13:19:08.064 [INFO][4274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfwb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1eb997b9-a4bf-4310-a37f-4b8c7364b569", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wsfwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec514825c6c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:08.084109 containerd[1588]: 2025-10-30 13:19:08.064 [INFO][4274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfwb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" Oct 30 13:19:08.084109 containerd[1588]: 2025-10-30 13:19:08.064 [INFO][4274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec514825c6c ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfwb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" Oct 30 13:19:08.084109 containerd[1588]: 2025-10-30 13:19:08.067 [INFO][4274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfwb" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" Oct 30 13:19:08.084175 containerd[1588]: 2025-10-30 13:19:08.070 [INFO][4274] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfwb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1eb997b9-a4bf-4310-a37f-4b8c7364b569", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4", Pod:"coredns-674b8bbfcf-wsfwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec514825c6c", MAC:"92:5e:8e:52:c7:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:08.084175 containerd[1588]: 2025-10-30 13:19:08.079 [INFO][4274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wsfwb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wsfwb-eth0" Oct 30 13:19:08.121216 containerd[1588]: time="2025-10-30T13:19:08.121157116Z" level=info msg="connecting to shim 2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4" address="unix:///run/containerd/s/56e806556675058c5767ed4317b2ff82e8da069fd8f67231b7bbc1b39254fd12" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:19:08.167608 systemd[1]: Started cri-containerd-2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4.scope - libcontainer container 2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4. 
Oct 30 13:19:08.170960 systemd-networkd[1508]: cali4121eb3e667: Link UP Oct 30 13:19:08.172081 systemd-networkd[1508]: cali4121eb3e667: Gained carrier Oct 30 13:19:08.187282 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:07.990 [INFO][4261] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.006 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--ws88l-eth0 goldmane-666569f655- calico-system ab05aa8b-f302-4973-9a9c-4a341cc5c31e 846 0 2025-10-30 13:18:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-ws88l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4121eb3e667 [] [] }} ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Namespace="calico-system" Pod="goldmane-666569f655-ws88l" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ws88l-" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.006 [INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Namespace="calico-system" Pod="goldmane-666569f655-ws88l" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ws88l-eth0" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.044 [INFO][4304] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" HandleID="k8s-pod-network.5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" 
Workload="localhost-k8s-goldmane--666569f655--ws88l-eth0" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.044 [INFO][4304] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" HandleID="k8s-pod-network.5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Workload="localhost-k8s-goldmane--666569f655--ws88l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-ws88l", "timestamp":"2025-10-30 13:19:08.044270227 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.044 [INFO][4304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.058 [INFO][4304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.058 [INFO][4304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.132 [INFO][4304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" host="localhost" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.137 [INFO][4304] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.142 [INFO][4304] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.143 [INFO][4304] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.145 [INFO][4304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.145 [INFO][4304] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" host="localhost" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.148 [INFO][4304] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.151 [INFO][4304] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" host="localhost" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.162 [INFO][4304] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" host="localhost" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.162 [INFO][4304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" host="localhost" Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.162 [INFO][4304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:19:08.196074 containerd[1588]: 2025-10-30 13:19:08.162 [INFO][4304] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" HandleID="k8s-pod-network.5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Workload="localhost-k8s-goldmane--666569f655--ws88l-eth0" Oct 30 13:19:08.196607 containerd[1588]: 2025-10-30 13:19:08.165 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Namespace="calico-system" Pod="goldmane-666569f655-ws88l" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ws88l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ws88l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab05aa8b-f302-4973-9a9c-4a341cc5c31e", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-ws88l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4121eb3e667", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:08.196607 containerd[1588]: 2025-10-30 13:19:08.166 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Namespace="calico-system" Pod="goldmane-666569f655-ws88l" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ws88l-eth0" Oct 30 13:19:08.196607 containerd[1588]: 2025-10-30 13:19:08.166 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4121eb3e667 ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Namespace="calico-system" Pod="goldmane-666569f655-ws88l" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ws88l-eth0" Oct 30 13:19:08.196607 containerd[1588]: 2025-10-30 13:19:08.178 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Namespace="calico-system" Pod="goldmane-666569f655-ws88l" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ws88l-eth0" Oct 30 13:19:08.196607 containerd[1588]: 2025-10-30 13:19:08.179 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Namespace="calico-system" Pod="goldmane-666569f655-ws88l" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ws88l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ws88l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab05aa8b-f302-4973-9a9c-4a341cc5c31e", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da", Pod:"goldmane-666569f655-ws88l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4121eb3e667", MAC:"a6:43:ba:c6:70:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:08.196607 containerd[1588]: 2025-10-30 13:19:08.191 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" Namespace="calico-system" Pod="goldmane-666569f655-ws88l" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ws88l-eth0" Oct 30 13:19:08.226097 containerd[1588]: time="2025-10-30T13:19:08.226008253Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-wsfwb,Uid:1eb997b9-a4bf-4310-a37f-4b8c7364b569,Namespace:kube-system,Attempt:0,} returns sandbox id \"2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4\"" Oct 30 13:19:08.227184 containerd[1588]: time="2025-10-30T13:19:08.227126572Z" level=info msg="connecting to shim 5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da" address="unix:///run/containerd/s/0f831bed6a89cfab5fc7a14c5a6df62157c835e6af4ca8599c0977c41d418db4" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:19:08.227241 kubelet[2769]: E1030 13:19:08.227146 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:08.256129 systemd[1]: Started cri-containerd-5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da.scope - libcontainer container 5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da. Oct 30 13:19:08.270932 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:19:08.273459 containerd[1588]: time="2025-10-30T13:19:08.273285963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:08.280713 containerd[1588]: time="2025-10-30T13:19:08.280559095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 13:19:08.280881 containerd[1588]: time="2025-10-30T13:19:08.280643819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 13:19:08.281447 kubelet[2769]: E1030 13:19:08.281388 2769 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:19:08.281620 kubelet[2769]: E1030 13:19:08.281543 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:19:08.282161 kubelet[2769]: E1030 13:19:08.282117 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8ced58a223049ba82448ee04d635af3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcptg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:Runtim
eDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59b5978bfb-8lpn6_calico-system(3b9db068-4663-4510-b3a1-4b6c4be0ac8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:08.282496 containerd[1588]: time="2025-10-30T13:19:08.282443510Z" level=info msg="CreateContainer within sandbox \"2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 13:19:08.285409 containerd[1588]: time="2025-10-30T13:19:08.285383596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 13:19:08.303026 containerd[1588]: time="2025-10-30T13:19:08.302787143Z" level=info msg="Container 87390e8e89a5af2ea36d8279ea8f9ddac6d5879b6d9a5c6fc3680d5985f8a2d6: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:19:08.311195 containerd[1588]: time="2025-10-30T13:19:08.311131632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ws88l,Uid:ab05aa8b-f302-4973-9a9c-4a341cc5c31e,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e2369d9d3cb6c50e7054ab6a728af30b4fe92e65a89533cb2d490dc477437da\"" Oct 30 13:19:08.312399 containerd[1588]: time="2025-10-30T13:19:08.312333304Z" level=info msg="CreateContainer within sandbox \"2461457dd76329993713c0ca7fee5901be4f8d91998f7de1fdab1dd0199418b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87390e8e89a5af2ea36d8279ea8f9ddac6d5879b6d9a5c6fc3680d5985f8a2d6\"" Oct 30 13:19:08.314002 containerd[1588]: time="2025-10-30T13:19:08.312959820Z" 
level=info msg="StartContainer for \"87390e8e89a5af2ea36d8279ea8f9ddac6d5879b6d9a5c6fc3680d5985f8a2d6\"" Oct 30 13:19:08.314002 containerd[1588]: time="2025-10-30T13:19:08.313890736Z" level=info msg="connecting to shim 87390e8e89a5af2ea36d8279ea8f9ddac6d5879b6d9a5c6fc3680d5985f8a2d6" address="unix:///run/containerd/s/56e806556675058c5767ed4317b2ff82e8da069fd8f67231b7bbc1b39254fd12" protocol=ttrpc version=3 Oct 30 13:19:08.344142 systemd[1]: Started cri-containerd-87390e8e89a5af2ea36d8279ea8f9ddac6d5879b6d9a5c6fc3680d5985f8a2d6.scope - libcontainer container 87390e8e89a5af2ea36d8279ea8f9ddac6d5879b6d9a5c6fc3680d5985f8a2d6. Oct 30 13:19:08.373346 kubelet[2769]: E1030 13:19:08.373278 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" podUID="c30303b7-2f8a-4e76-affb-92ba5d248c6b" Oct 30 13:19:08.395305 systemd-networkd[1508]: vxlan.calico: Link UP Oct 30 13:19:08.395521 systemd-networkd[1508]: vxlan.calico: Gained carrier Oct 30 13:19:08.406105 containerd[1588]: time="2025-10-30T13:19:08.403820472Z" level=info msg="StartContainer for \"87390e8e89a5af2ea36d8279ea8f9ddac6d5879b6d9a5c6fc3680d5985f8a2d6\" returns successfully" Oct 30 13:19:08.450111 systemd-networkd[1508]: cali3fa1c692952: Gained IPv6LL Oct 30 13:19:08.639296 containerd[1588]: time="2025-10-30T13:19:08.639220847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:08.640619 containerd[1588]: time="2025-10-30T13:19:08.640548062Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 13:19:08.640698 containerd[1588]: time="2025-10-30T13:19:08.640579573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 13:19:08.640956 kubelet[2769]: E1030 13:19:08.640890 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:19:08.640956 kubelet[2769]: E1030 13:19:08.640957 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:19:08.641315 kubelet[2769]: E1030 13:19:08.641258 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcptg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59b5978bfb-8lpn6_calico-system(3b9db068-4663-4510-b3a1-4b6c4be0ac8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:08.641565 containerd[1588]: time="2025-10-30T13:19:08.641537782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 13:19:08.642571 kubelet[2769]: E1030 13:19:08.642525 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59b5978bfb-8lpn6" podUID="3b9db068-4663-4510-b3a1-4b6c4be0ac8f" Oct 30 13:19:08.770161 systemd-networkd[1508]: cali5ecce26ad2d: Gained IPv6LL Oct 30 13:19:08.913624 containerd[1588]: time="2025-10-30T13:19:08.913554361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jqhml,Uid:f14ca4d9-aac0-4af6-a374-d183e93fb183,Namespace:calico-system,Attempt:0,}" Oct 30 13:19:08.913778 containerd[1588]: time="2025-10-30T13:19:08.913573828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6798f4bdc5-q6qdh,Uid:2f3b781a-b409-44c1-bfbe-62b7c2fd7f95,Namespace:calico-system,Attempt:0,}" Oct 30 13:19:08.983213 containerd[1588]: time="2025-10-30T13:19:08.983132088Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:08.985603 containerd[1588]: 
time="2025-10-30T13:19:08.985494552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 13:19:08.985603 containerd[1588]: time="2025-10-30T13:19:08.985577493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 13:19:08.985939 kubelet[2769]: E1030 13:19:08.985831 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:19:08.985939 kubelet[2769]: E1030 13:19:08.985890 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:19:08.986276 kubelet[2769]: E1030 13:19:08.986224 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx8pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ws88l_calico-system(ab05aa8b-f302-4973-9a9c-4a341cc5c31e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:08.987560 kubelet[2769]: E1030 13:19:08.987520 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ws88l" podUID="ab05aa8b-f302-4973-9a9c-4a341cc5c31e" Oct 30 13:19:09.035099 systemd-networkd[1508]: califbfed49a3e7: Link UP Oct 30 13:19:09.035359 systemd-networkd[1508]: califbfed49a3e7: Gained carrier Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:08.954 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jqhml-eth0 csi-node-driver- calico-system f14ca4d9-aac0-4af6-a374-d183e93fb183 728 0 2025-10-30 13:18:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jqhml eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califbfed49a3e7 [] [] }} ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Namespace="calico-system" Pod="csi-node-driver-jqhml" WorkloadEndpoint="localhost-k8s-csi--node--driver--jqhml-" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:08.954 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Namespace="calico-system" Pod="csi-node-driver-jqhml" WorkloadEndpoint="localhost-k8s-csi--node--driver--jqhml-eth0" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:08.992 [INFO][4572] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" HandleID="k8s-pod-network.ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Workload="localhost-k8s-csi--node--driver--jqhml-eth0" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:08.992 [INFO][4572] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" HandleID="k8s-pod-network.ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Workload="localhost-k8s-csi--node--driver--jqhml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f3f0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jqhml", "timestamp":"2025-10-30 13:19:08.992612522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:08.993 [INFO][4572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:08.993 [INFO][4572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:08.993 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.003 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" host="localhost" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.008 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.013 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.014 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.017 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.017 [INFO][4572] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" host="localhost" Oct 30 13:19:09.056834 
containerd[1588]: 2025-10-30 13:19:09.018 [INFO][4572] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758 Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.022 [INFO][4572] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" host="localhost" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.028 [INFO][4572] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" host="localhost" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.028 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" host="localhost" Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.028 [INFO][4572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:19:09.056834 containerd[1588]: 2025-10-30 13:19:09.028 [INFO][4572] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" HandleID="k8s-pod-network.ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Workload="localhost-k8s-csi--node--driver--jqhml-eth0" Oct 30 13:19:09.057484 containerd[1588]: 2025-10-30 13:19:09.031 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Namespace="calico-system" Pod="csi-node-driver-jqhml" WorkloadEndpoint="localhost-k8s-csi--node--driver--jqhml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jqhml-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f14ca4d9-aac0-4af6-a374-d183e93fb183", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jqhml", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbfed49a3e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:09.057484 containerd[1588]: 2025-10-30 13:19:09.031 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Namespace="calico-system" Pod="csi-node-driver-jqhml" WorkloadEndpoint="localhost-k8s-csi--node--driver--jqhml-eth0" Oct 30 13:19:09.057484 containerd[1588]: 2025-10-30 13:19:09.031 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbfed49a3e7 ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Namespace="calico-system" Pod="csi-node-driver-jqhml" WorkloadEndpoint="localhost-k8s-csi--node--driver--jqhml-eth0" Oct 30 13:19:09.057484 containerd[1588]: 2025-10-30 13:19:09.036 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Namespace="calico-system" Pod="csi-node-driver-jqhml" WorkloadEndpoint="localhost-k8s-csi--node--driver--jqhml-eth0" Oct 30 13:19:09.057484 containerd[1588]: 2025-10-30 13:19:09.036 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Namespace="calico-system" Pod="csi-node-driver-jqhml" WorkloadEndpoint="localhost-k8s-csi--node--driver--jqhml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jqhml-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f14ca4d9-aac0-4af6-a374-d183e93fb183", ResourceVersion:"728", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758", Pod:"csi-node-driver-jqhml", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbfed49a3e7", MAC:"72:38:bd:5e:6a:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:09.057484 containerd[1588]: 2025-10-30 13:19:09.053 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" Namespace="calico-system" Pod="csi-node-driver-jqhml" WorkloadEndpoint="localhost-k8s-csi--node--driver--jqhml-eth0" Oct 30 13:19:09.129004 containerd[1588]: time="2025-10-30T13:19:09.128928668Z" level=info msg="connecting to shim ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758" address="unix:///run/containerd/s/f8272ccb7fa5127411a5bd2a19e94bee0cfdf047d60bd24a8c950c7f3e52d035" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:19:09.149948 systemd-networkd[1508]: cali9e5b32e7eb9: Link UP Oct 30 13:19:09.150363 systemd-networkd[1508]: 
cali9e5b32e7eb9: Gained carrier Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:08.965 [INFO][4552] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0 calico-kube-controllers-6798f4bdc5- calico-system 2f3b781a-b409-44c1-bfbe-62b7c2fd7f95 843 0 2025-10-30 13:18:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6798f4bdc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6798f4bdc5-q6qdh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9e5b32e7eb9 [] [] }} ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Namespace="calico-system" Pod="calico-kube-controllers-6798f4bdc5-q6qdh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:08.965 [INFO][4552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Namespace="calico-system" Pod="calico-kube-controllers-6798f4bdc5-q6qdh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.003 [INFO][4578] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" HandleID="k8s-pod-network.4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Workload="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.003 [INFO][4578] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" HandleID="k8s-pod-network.4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Workload="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de500), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6798f4bdc5-q6qdh", "timestamp":"2025-10-30 13:19:09.003571012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.004 [INFO][4578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.028 [INFO][4578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.028 [INFO][4578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.104 [INFO][4578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" host="localhost" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.111 [INFO][4578] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.118 [INFO][4578] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.121 [INFO][4578] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.123 [INFO][4578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.123 [INFO][4578] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" host="localhost" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.125 [INFO][4578] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5 Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.130 [INFO][4578] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" host="localhost" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.136 [INFO][4578] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" host="localhost" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.136 [INFO][4578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" host="localhost" Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.136 [INFO][4578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:19:09.171918 containerd[1588]: 2025-10-30 13:19:09.136 [INFO][4578] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" HandleID="k8s-pod-network.4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Workload="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" Oct 30 13:19:09.172473 containerd[1588]: 2025-10-30 13:19:09.146 [INFO][4552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Namespace="calico-system" Pod="calico-kube-controllers-6798f4bdc5-q6qdh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0", GenerateName:"calico-kube-controllers-6798f4bdc5-", Namespace:"calico-system", SelfLink:"", UID:"2f3b781a-b409-44c1-bfbe-62b7c2fd7f95", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6798f4bdc5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6798f4bdc5-q6qdh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e5b32e7eb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:09.172473 containerd[1588]: 2025-10-30 13:19:09.146 [INFO][4552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Namespace="calico-system" Pod="calico-kube-controllers-6798f4bdc5-q6qdh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" Oct 30 13:19:09.172473 containerd[1588]: 2025-10-30 13:19:09.146 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e5b32e7eb9 ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Namespace="calico-system" Pod="calico-kube-controllers-6798f4bdc5-q6qdh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" Oct 30 13:19:09.172473 containerd[1588]: 2025-10-30 13:19:09.151 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Namespace="calico-system" Pod="calico-kube-controllers-6798f4bdc5-q6qdh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" Oct 30 13:19:09.172473 containerd[1588]: 
2025-10-30 13:19:09.152 [INFO][4552] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Namespace="calico-system" Pod="calico-kube-controllers-6798f4bdc5-q6qdh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0", GenerateName:"calico-kube-controllers-6798f4bdc5-", Namespace:"calico-system", SelfLink:"", UID:"2f3b781a-b409-44c1-bfbe-62b7c2fd7f95", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6798f4bdc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5", Pod:"calico-kube-controllers-6798f4bdc5-q6qdh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e5b32e7eb9", MAC:"06:84:0d:6a:11:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:09.172473 containerd[1588]: 
2025-10-30 13:19:09.166 [INFO][4552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" Namespace="calico-system" Pod="calico-kube-controllers-6798f4bdc5-q6qdh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6798f4bdc5--q6qdh-eth0" Oct 30 13:19:09.194354 containerd[1588]: time="2025-10-30T13:19:09.194229761Z" level=info msg="connecting to shim 4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5" address="unix:///run/containerd/s/fcc3bc006e041adb17cfd3f9e1544e2717cb89471a600e6bfb683efdeff576bb" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:19:09.205139 systemd[1]: Started cri-containerd-ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758.scope - libcontainer container ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758. Oct 30 13:19:09.235138 systemd[1]: Started cri-containerd-4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5.scope - libcontainer container 4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5. 
Oct 30 13:19:09.240156 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:19:09.256318 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:19:09.261818 containerd[1588]: time="2025-10-30T13:19:09.261764816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jqhml,Uid:f14ca4d9-aac0-4af6-a374-d183e93fb183,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce547bc93a0ac7349504db60afe8a0b02c06d44bd53e87fe2d07ca8251a02758\"" Oct 30 13:19:09.266043 containerd[1588]: time="2025-10-30T13:19:09.265798568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 13:19:09.294938 containerd[1588]: time="2025-10-30T13:19:09.294795514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6798f4bdc5-q6qdh,Uid:2f3b781a-b409-44c1-bfbe-62b7c2fd7f95,Namespace:calico-system,Attempt:0,} returns sandbox id \"4216af5bf180d883e49f770b8a4d99634a054f427ea40395ed1b39d0f8ed3be5\"" Oct 30 13:19:09.379417 kubelet[2769]: E1030 13:19:09.379350 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:09.393020 kubelet[2769]: I1030 13:19:09.391594 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wsfwb" podStartSLOduration=36.39157665 podStartE2EDuration="36.39157665s" podCreationTimestamp="2025-10-30 13:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:19:09.39089093 +0000 UTC m=+43.575841018" watchObservedRunningTime="2025-10-30 13:19:09.39157665 +0000 UTC m=+43.576526728" Oct 30 13:19:09.396812 kubelet[2769]: E1030 13:19:09.396762 2769 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" podUID="c30303b7-2f8a-4e76-affb-92ba5d248c6b" Oct 30 13:19:09.397209 kubelet[2769]: E1030 13:19:09.396975 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ws88l" podUID="ab05aa8b-f302-4973-9a9c-4a341cc5c31e" Oct 30 13:19:09.397209 kubelet[2769]: E1030 13:19:09.397079 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59b5978bfb-8lpn6" podUID="3b9db068-4663-4510-b3a1-4b6c4be0ac8f" Oct 30 13:19:09.590345 containerd[1588]: time="2025-10-30T13:19:09.590164524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:09.591495 containerd[1588]: time="2025-10-30T13:19:09.591411030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 13:19:09.591495 containerd[1588]: time="2025-10-30T13:19:09.591480245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 13:19:09.591821 kubelet[2769]: E1030 13:19:09.591744 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:19:09.591821 kubelet[2769]: E1030 13:19:09.591812 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:19:09.592195 kubelet[2769]: E1030 13:19:09.592136 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpfc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jqhml_calico-system(f14ca4d9-aac0-4af6-a374-d183e93fb183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:09.592418 containerd[1588]: time="2025-10-30T13:19:09.592319081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 13:19:09.858264 systemd-networkd[1508]: cali4121eb3e667: Gained IPv6LL Oct 30 13:19:09.914059 containerd[1588]: time="2025-10-30T13:19:09.914006535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff94d9bcc-4z7tw,Uid:8185e627-e431-4f4e-9719-dc6a950cb7cf,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:19:09.970746 containerd[1588]: time="2025-10-30T13:19:09.970530137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:09.972682 containerd[1588]: time="2025-10-30T13:19:09.972610932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 13:19:09.972781 containerd[1588]: time="2025-10-30T13:19:09.972628846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 13:19:09.973125 kubelet[2769]: E1030 13:19:09.973054 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:19:09.973208 kubelet[2769]: E1030 13:19:09.973138 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:19:09.973468 kubelet[2769]: E1030 13:19:09.973385 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmp74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6798f4bdc5-q6qdh_calico-system(2f3b781a-b409-44c1-bfbe-62b7c2fd7f95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:09.974032 containerd[1588]: time="2025-10-30T13:19:09.973968925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 13:19:09.975409 kubelet[2769]: E1030 13:19:09.975369 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" podUID="2f3b781a-b409-44c1-bfbe-62b7c2fd7f95" Oct 30 13:19:09.987258 systemd-networkd[1508]: caliec514825c6c: Gained IPv6LL Oct 30 13:19:10.018094 systemd-networkd[1508]: cali939707fc11f: Link UP Oct 30 13:19:10.020183 systemd-networkd[1508]: cali939707fc11f: Gained carrier Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.949 [INFO][4703] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0 calico-apiserver-ff94d9bcc- calico-apiserver 8185e627-e431-4f4e-9719-dc6a950cb7cf 848 0 2025-10-30 13:18:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ff94d9bcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ff94d9bcc-4z7tw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali939707fc11f [] [] }} ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4z7tw" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.950 [INFO][4703] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4z7tw" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.978 [INFO][4718] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" HandleID="k8s-pod-network.2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Workload="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.978 [INFO][4718] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" HandleID="k8s-pod-network.2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Workload="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dee20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ff94d9bcc-4z7tw", "timestamp":"2025-10-30 13:19:09.978392683 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.978 [INFO][4718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.978 [INFO][4718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.978 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.986 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" host="localhost" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.993 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.997 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:09.999 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:10.001 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:10.001 [INFO][4718] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" host="localhost" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:10.003 [INFO][4718] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:10.006 [INFO][4718] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" host="localhost" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:10.011 [INFO][4718] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" host="localhost" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:10.011 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" host="localhost" Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:10.011 [INFO][4718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 13:19:10.032974 containerd[1588]: 2025-10-30 13:19:10.012 [INFO][4718] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" HandleID="k8s-pod-network.2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Workload="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" Oct 30 13:19:10.033817 containerd[1588]: 2025-10-30 13:19:10.015 [INFO][4703] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4z7tw" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0", GenerateName:"calico-apiserver-ff94d9bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8185e627-e431-4f4e-9719-dc6a950cb7cf", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff94d9bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ff94d9bcc-4z7tw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali939707fc11f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:10.033817 containerd[1588]: 2025-10-30 13:19:10.015 [INFO][4703] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4z7tw" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" Oct 30 13:19:10.033817 containerd[1588]: 2025-10-30 13:19:10.015 [INFO][4703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali939707fc11f ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4z7tw" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" Oct 30 13:19:10.033817 containerd[1588]: 2025-10-30 13:19:10.018 [INFO][4703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4z7tw" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" Oct 30 13:19:10.033817 containerd[1588]: 2025-10-30 13:19:10.018 [INFO][4703] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4z7tw" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0", GenerateName:"calico-apiserver-ff94d9bcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8185e627-e431-4f4e-9719-dc6a950cb7cf", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff94d9bcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a", Pod:"calico-apiserver-ff94d9bcc-4z7tw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali939707fc11f", MAC:"f6:8c:2d:d8:ad:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:10.033817 containerd[1588]: 2025-10-30 13:19:10.029 [INFO][4703] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" Namespace="calico-apiserver" Pod="calico-apiserver-ff94d9bcc-4z7tw" WorkloadEndpoint="localhost-k8s-calico--apiserver--ff94d9bcc--4z7tw-eth0" Oct 30 13:19:10.067743 containerd[1588]: time="2025-10-30T13:19:10.067689973Z" level=info msg="connecting to shim 2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a" address="unix:///run/containerd/s/a8f746ce4e34b611ed0dbd9c34d1fbd71659643c93f6dfd6bc3ecabfd384cbd4" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:19:10.102127 systemd[1]: Started cri-containerd-2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a.scope - libcontainer container 2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a. Oct 30 13:19:10.115165 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:19:10.152365 containerd[1588]: time="2025-10-30T13:19:10.152316515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff94d9bcc-4z7tw,Uid:8185e627-e431-4f4e-9719-dc6a950cb7cf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2165ffacd512a16a08482270948a59e7d25fe3c6141387d7b3555fc56515cb2a\"" Oct 30 13:19:10.306230 systemd-networkd[1508]: vxlan.calico: Gained IPv6LL Oct 30 13:19:10.306608 systemd-networkd[1508]: cali9e5b32e7eb9: Gained IPv6LL Oct 30 13:19:10.386687 containerd[1588]: time="2025-10-30T13:19:10.386564798Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:10.387776 containerd[1588]: time="2025-10-30T13:19:10.387689367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" Oct 30 13:19:10.387776 containerd[1588]: time="2025-10-30T13:19:10.387746618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 13:19:10.388026 kubelet[2769]: E1030 13:19:10.387955 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:19:10.388491 kubelet[2769]: E1030 13:19:10.388040 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:19:10.388491 kubelet[2769]: E1030 13:19:10.388357 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpfc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jqhml_calico-system(f14ca4d9-aac0-4af6-a374-d183e93fb183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:10.388617 containerd[1588]: time="2025-10-30T13:19:10.388533082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:19:10.389750 kubelet[2769]: E1030 13:19:10.389670 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:19:10.396921 kubelet[2769]: E1030 13:19:10.396881 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:10.398380 kubelet[2769]: E1030 13:19:10.398307 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:19:10.398514 kubelet[2769]: E1030 13:19:10.398408 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" podUID="2f3b781a-b409-44c1-bfbe-62b7c2fd7f95" Oct 30 13:19:10.690210 systemd-networkd[1508]: califbfed49a3e7: Gained IPv6LL Oct 30 13:19:10.756012 containerd[1588]: time="2025-10-30T13:19:10.755919641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:10.757157 containerd[1588]: time="2025-10-30T13:19:10.757111321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:19:10.757263 containerd[1588]: time="2025-10-30T13:19:10.757217516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 
30 13:19:10.757433 kubelet[2769]: E1030 13:19:10.757382 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:10.757488 kubelet[2769]: E1030 13:19:10.757440 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:10.757684 kubelet[2769]: E1030 13:19:10.757633 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7p4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff94d9bcc-4z7tw_calico-apiserver(8185e627-e431-4f4e-9719-dc6a950cb7cf): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:10.759797 kubelet[2769]: E1030 13:19:10.759764 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" podUID="8185e627-e431-4f4e-9719-dc6a950cb7cf" Oct 30 13:19:10.913376 kubelet[2769]: E1030 13:19:10.913340 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:10.913865 containerd[1588]: time="2025-10-30T13:19:10.913819334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4zjz,Uid:f0340278-e067-4b59-87c3-c2890d479a3c,Namespace:kube-system,Attempt:0,}" Oct 30 13:19:10.981742 systemd[1]: Started sshd@8-10.0.0.37:22-10.0.0.1:60590.service - OpenSSH per-connection server daemon (10.0.0.1:60590). 
Oct 30 13:19:11.016526 systemd-networkd[1508]: caliab1ae019bd7: Link UP Oct 30 13:19:11.018124 systemd-networkd[1508]: caliab1ae019bd7: Gained carrier Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.947 [INFO][4781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0 coredns-674b8bbfcf- kube-system f0340278-e067-4b59-87c3-c2890d479a3c 844 0 2025-10-30 13:18:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-l4zjz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab1ae019bd7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4zjz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l4zjz-" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.948 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4zjz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.972 [INFO][4796] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" HandleID="k8s-pod-network.d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Workload="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.972 [INFO][4796] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" 
HandleID="k8s-pod-network.d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Workload="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00018ab10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-l4zjz", "timestamp":"2025-10-30 13:19:10.97211058 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.972 [INFO][4796] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.972 [INFO][4796] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.972 [INFO][4796] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.978 [INFO][4796] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" host="localhost" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.984 [INFO][4796] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.989 [INFO][4796] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.991 [INFO][4796] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.993 [INFO][4796] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.993 
[INFO][4796] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" host="localhost" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.994 [INFO][4796] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:10.999 [INFO][4796] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" host="localhost" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:11.006 [INFO][4796] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" host="localhost" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:11.006 [INFO][4796] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" host="localhost" Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:11.006 [INFO][4796] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:19:11.033466 containerd[1588]: 2025-10-30 13:19:11.006 [INFO][4796] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" HandleID="k8s-pod-network.d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Workload="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" Oct 30 13:19:11.034975 containerd[1588]: 2025-10-30 13:19:11.012 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4zjz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f0340278-e067-4b59-87c3-c2890d479a3c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-l4zjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab1ae019bd7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:11.034975 containerd[1588]: 2025-10-30 13:19:11.013 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4zjz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" Oct 30 13:19:11.034975 containerd[1588]: 2025-10-30 13:19:11.013 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab1ae019bd7 ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4zjz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" Oct 30 13:19:11.034975 containerd[1588]: 2025-10-30 13:19:11.017 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4zjz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" Oct 30 13:19:11.034975 containerd[1588]: 2025-10-30 13:19:11.018 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4zjz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f0340278-e067-4b59-87c3-c2890d479a3c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 18, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b", Pod:"coredns-674b8bbfcf-l4zjz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab1ae019bd7", MAC:"82:6c:c1:d1:6b:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:19:11.034975 containerd[1588]: 2025-10-30 13:19:11.027 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4zjz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l4zjz-eth0" Oct 30 13:19:11.059542 sshd[4805]: Accepted publickey for core from 10.0.0.1 port 60590 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:11.061222 containerd[1588]: time="2025-10-30T13:19:11.061168087Z" level=info msg="connecting to shim d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b" address="unix:///run/containerd/s/e9e6a9004f7850903dac81a535f8c9b60548e8a86a88b57a7b9eb2aa42b57873" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:19:11.061689 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:11.067319 systemd-logind[1576]: New session 9 of user core. Oct 30 13:19:11.076178 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 13:19:11.092114 systemd[1]: Started cri-containerd-d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b.scope - libcontainer container d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b. 
Oct 30 13:19:11.107191 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:19:11.149024 containerd[1588]: time="2025-10-30T13:19:11.146840858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4zjz,Uid:f0340278-e067-4b59-87c3-c2890d479a3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b\"" Oct 30 13:19:11.149211 kubelet[2769]: E1030 13:19:11.148495 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:11.159018 containerd[1588]: time="2025-10-30T13:19:11.158259515Z" level=info msg="CreateContainer within sandbox \"d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 13:19:11.170160 containerd[1588]: time="2025-10-30T13:19:11.170100610Z" level=info msg="Container eb7cfca1dd838bd436004e38188696c6adf3603d624d99bbd46dc69548255d04: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:19:11.177005 containerd[1588]: time="2025-10-30T13:19:11.176526125Z" level=info msg="CreateContainer within sandbox \"d8b57c65b0bbc971fc55571e7f44cdc5f267dfd797e3b8d20654e0bd13ac901b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb7cfca1dd838bd436004e38188696c6adf3603d624d99bbd46dc69548255d04\"" Oct 30 13:19:11.177183 containerd[1588]: time="2025-10-30T13:19:11.177153811Z" level=info msg="StartContainer for \"eb7cfca1dd838bd436004e38188696c6adf3603d624d99bbd46dc69548255d04\"" Oct 30 13:19:11.179657 containerd[1588]: time="2025-10-30T13:19:11.177902792Z" level=info msg="connecting to shim eb7cfca1dd838bd436004e38188696c6adf3603d624d99bbd46dc69548255d04" address="unix:///run/containerd/s/e9e6a9004f7850903dac81a535f8c9b60548e8a86a88b57a7b9eb2aa42b57873" protocol=ttrpc version=3 
Oct 30 13:19:11.197921 sshd[4849]: Connection closed by 10.0.0.1 port 60590 Oct 30 13:19:11.198532 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:11.204212 systemd[1]: Started cri-containerd-eb7cfca1dd838bd436004e38188696c6adf3603d624d99bbd46dc69548255d04.scope - libcontainer container eb7cfca1dd838bd436004e38188696c6adf3603d624d99bbd46dc69548255d04. Oct 30 13:19:11.204788 systemd[1]: sshd@8-10.0.0.37:22-10.0.0.1:60590.service: Deactivated successfully. Oct 30 13:19:11.207426 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 13:19:11.210533 systemd-logind[1576]: Session 9 logged out. Waiting for processes to exit. Oct 30 13:19:11.211694 systemd-logind[1576]: Removed session 9. Oct 30 13:19:11.240918 containerd[1588]: time="2025-10-30T13:19:11.239972837Z" level=info msg="StartContainer for \"eb7cfca1dd838bd436004e38188696c6adf3603d624d99bbd46dc69548255d04\" returns successfully" Oct 30 13:19:11.401014 kubelet[2769]: E1030 13:19:11.400697 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:11.401488 kubelet[2769]: E1030 13:19:11.401196 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:11.401673 kubelet[2769]: E1030 13:19:11.401619 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" podUID="8185e627-e431-4f4e-9719-dc6a950cb7cf" Oct 30 13:19:11.425013 kubelet[2769]: I1030 13:19:11.424696 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l4zjz" podStartSLOduration=38.424675881 podStartE2EDuration="38.424675881s" podCreationTimestamp="2025-10-30 13:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:19:11.423163361 +0000 UTC m=+45.608113440" watchObservedRunningTime="2025-10-30 13:19:11.424675881 +0000 UTC m=+45.609625959" Oct 30 13:19:11.458293 systemd-networkd[1508]: cali939707fc11f: Gained IPv6LL Oct 30 13:19:12.402497 kubelet[2769]: E1030 13:19:12.402442 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:12.482302 systemd-networkd[1508]: caliab1ae019bd7: Gained IPv6LL Oct 30 13:19:13.405083 kubelet[2769]: E1030 13:19:13.405036 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:14.406282 kubelet[2769]: E1030 13:19:14.406240 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:16.213727 systemd[1]: Started sshd@9-10.0.0.37:22-10.0.0.1:57964.service - OpenSSH per-connection server daemon (10.0.0.1:57964). 
Oct 30 13:19:16.264830 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 57964 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:16.266181 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:16.270803 systemd-logind[1576]: New session 10 of user core. Oct 30 13:19:16.281111 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 13:19:16.360140 sshd[4930]: Connection closed by 10.0.0.1 port 57964 Oct 30 13:19:16.360465 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:16.365402 systemd[1]: sshd@9-10.0.0.37:22-10.0.0.1:57964.service: Deactivated successfully. Oct 30 13:19:16.367521 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 13:19:16.368398 systemd-logind[1576]: Session 10 logged out. Waiting for processes to exit. Oct 30 13:19:16.369682 systemd-logind[1576]: Removed session 10. Oct 30 13:19:20.914406 containerd[1588]: time="2025-10-30T13:19:20.914279710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:19:21.258062 containerd[1588]: time="2025-10-30T13:19:21.257884476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:21.259214 containerd[1588]: time="2025-10-30T13:19:21.259142895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:19:21.259385 containerd[1588]: time="2025-10-30T13:19:21.259218300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:19:21.259464 kubelet[2769]: E1030 13:19:21.259416 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:21.259825 kubelet[2769]: E1030 13:19:21.259477 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:21.259825 kubelet[2769]: E1030 13:19:21.259635 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdf6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff94d9bcc-4c6ph_calico-apiserver(c30303b7-2f8a-4e76-affb-92ba5d248c6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:21.260940 kubelet[2769]: E1030 13:19:21.260870 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" podUID="c30303b7-2f8a-4e76-affb-92ba5d248c6b" Oct 30 13:19:21.374530 systemd[1]: Started sshd@10-10.0.0.37:22-10.0.0.1:57970.service - OpenSSH per-connection server daemon (10.0.0.1:57970). 
Oct 30 13:19:21.427205 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 57970 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:21.429244 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:21.434030 systemd-logind[1576]: New session 11 of user core. Oct 30 13:19:21.444123 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 13:19:21.517906 sshd[4955]: Connection closed by 10.0.0.1 port 57970 Oct 30 13:19:21.518237 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:21.523249 systemd[1]: sshd@10-10.0.0.37:22-10.0.0.1:57970.service: Deactivated successfully. Oct 30 13:19:21.525302 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 13:19:21.526126 systemd-logind[1576]: Session 11 logged out. Waiting for processes to exit. Oct 30 13:19:21.527458 systemd-logind[1576]: Removed session 11. Oct 30 13:19:21.914965 containerd[1588]: time="2025-10-30T13:19:21.914786337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 13:19:22.267934 containerd[1588]: time="2025-10-30T13:19:22.267876876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:22.269113 containerd[1588]: time="2025-10-30T13:19:22.269055138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 13:19:22.269201 containerd[1588]: time="2025-10-30T13:19:22.269139822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 13:19:22.269363 kubelet[2769]: E1030 13:19:22.269306 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:19:22.269767 kubelet[2769]: E1030 13:19:22.269372 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:19:22.269767 kubelet[2769]: E1030 13:19:22.269523 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8ced58a223049ba82448ee04d635af3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcptg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmor
Profile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59b5978bfb-8lpn6_calico-system(3b9db068-4663-4510-b3a1-4b6c4be0ac8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:22.271657 containerd[1588]: time="2025-10-30T13:19:22.271608464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 13:19:22.644675 containerd[1588]: time="2025-10-30T13:19:22.644492187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:22.740617 containerd[1588]: time="2025-10-30T13:19:22.740513463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 13:19:22.740775 containerd[1588]: time="2025-10-30T13:19:22.740597415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 13:19:22.740856 kubelet[2769]: E1030 13:19:22.740796 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:19:22.740908 kubelet[2769]: E1030 
13:19:22.740866 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:19:22.741113 kubelet[2769]: E1030 13:19:22.741049 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcptg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,W
indowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59b5978bfb-8lpn6_calico-system(3b9db068-4663-4510-b3a1-4b6c4be0ac8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:22.742446 kubelet[2769]: E1030 13:19:22.742314 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59b5978bfb-8lpn6" podUID="3b9db068-4663-4510-b3a1-4b6c4be0ac8f" Oct 30 13:19:22.915098 containerd[1588]: time="2025-10-30T13:19:22.914488441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:19:23.348126 containerd[1588]: time="2025-10-30T13:19:23.348068753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:23.349429 containerd[1588]: time="2025-10-30T13:19:23.349382957Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:19:23.349520 containerd[1588]: time="2025-10-30T13:19:23.349396493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:19:23.349676 kubelet[2769]: E1030 13:19:23.349629 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:23.350082 kubelet[2769]: E1030 13:19:23.349679 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:23.350082 kubelet[2769]: E1030 13:19:23.349812 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7p4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff94d9bcc-4z7tw_calico-apiserver(8185e627-e431-4f4e-9719-dc6a950cb7cf): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:23.351043 kubelet[2769]: E1030 13:19:23.351001 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" podUID="8185e627-e431-4f4e-9719-dc6a950cb7cf" Oct 30 13:19:23.914843 containerd[1588]: time="2025-10-30T13:19:23.914756634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 13:19:24.318510 containerd[1588]: time="2025-10-30T13:19:24.318438332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:24.319690 containerd[1588]: time="2025-10-30T13:19:24.319623678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 13:19:24.319748 containerd[1588]: time="2025-10-30T13:19:24.319702320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 13:19:24.319944 kubelet[2769]: E1030 13:19:24.319886 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:19:24.320064 kubelet[2769]: E1030 13:19:24.319949 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:19:24.320255 kubelet[2769]: E1030 13:19:24.320197 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpfc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jqhml_calico-system(f14ca4d9-aac0-4af6-a374-d183e93fb183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:24.320382 containerd[1588]: time="2025-10-30T13:19:24.320319700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 13:19:24.696326 containerd[1588]: time="2025-10-30T13:19:24.696176159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:24.697348 containerd[1588]: time="2025-10-30T13:19:24.697302882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 13:19:24.697449 containerd[1588]: time="2025-10-30T13:19:24.697367817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 13:19:24.697562 kubelet[2769]: E1030 13:19:24.697518 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:19:24.697962 kubelet[2769]: E1030 13:19:24.697567 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:19:24.697962 kubelet[2769]: E1030 13:19:24.697820 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmp74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec
:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6798f4bdc5-q6qdh_calico-system(2f3b781a-b409-44c1-bfbe-62b7c2fd7f95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:24.698191 containerd[1588]: time="2025-10-30T13:19:24.697821111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 13:19:24.699358 kubelet[2769]: E1030 13:19:24.699270 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" podUID="2f3b781a-b409-44c1-bfbe-62b7c2fd7f95" Oct 30 13:19:25.035815 containerd[1588]: time="2025-10-30T13:19:25.035757763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:25.037095 containerd[1588]: time="2025-10-30T13:19:25.037045857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 13:19:25.037095 containerd[1588]: time="2025-10-30T13:19:25.037090663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 13:19:25.037373 kubelet[2769]: E1030 13:19:25.037301 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:19:25.037373 kubelet[2769]: E1030 13:19:25.037358 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:19:25.037661 kubelet[2769]: E1030 13:19:25.037604 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpfc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jqhml_calico-system(f14ca4d9-aac0-4af6-a374-d183e93fb183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:25.037803 containerd[1588]: time="2025-10-30T13:19:25.037704215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 13:19:25.039201 kubelet[2769]: E1030 13:19:25.039161 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:19:25.413391 containerd[1588]: time="2025-10-30T13:19:25.413235361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:25.414430 containerd[1588]: time="2025-10-30T13:19:25.414388885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 13:19:25.414496 containerd[1588]: time="2025-10-30T13:19:25.414462947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 13:19:25.414621 kubelet[2769]: E1030 13:19:25.414586 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:19:25.414678 kubelet[2769]: E1030 13:19:25.414633 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:19:25.414836 kubelet[2769]: E1030 13:19:25.414788 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx8pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ws88l_calico-system(ab05aa8b-f302-4973-9a9c-4a341cc5c31e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:25.416012 kubelet[2769]: E1030 13:19:25.415954 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ws88l" podUID="ab05aa8b-f302-4973-9a9c-4a341cc5c31e" Oct 30 13:19:26.531588 systemd[1]: Started sshd@11-10.0.0.37:22-10.0.0.1:40118.service - OpenSSH per-connection server daemon (10.0.0.1:40118). 
Oct 30 13:19:26.593640 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 40118 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:26.595885 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:26.601695 systemd-logind[1576]: New session 12 of user core. Oct 30 13:19:26.609156 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 13:19:26.688717 sshd[4976]: Connection closed by 10.0.0.1 port 40118 Oct 30 13:19:26.689155 sshd-session[4973]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:26.697933 systemd[1]: sshd@11-10.0.0.37:22-10.0.0.1:40118.service: Deactivated successfully. Oct 30 13:19:26.700139 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 13:19:26.701140 systemd-logind[1576]: Session 12 logged out. Waiting for processes to exit. Oct 30 13:19:26.704178 systemd[1]: Started sshd@12-10.0.0.37:22-10.0.0.1:40124.service - OpenSSH per-connection server daemon (10.0.0.1:40124). Oct 30 13:19:26.704808 systemd-logind[1576]: Removed session 12. Oct 30 13:19:26.763161 sshd[4990]: Accepted publickey for core from 10.0.0.1 port 40124 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:26.764558 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:26.769731 systemd-logind[1576]: New session 13 of user core. Oct 30 13:19:26.784146 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 13:19:26.907650 sshd[4993]: Connection closed by 10.0.0.1 port 40124 Oct 30 13:19:26.908287 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:26.920642 systemd[1]: sshd@12-10.0.0.37:22-10.0.0.1:40124.service: Deactivated successfully. Oct 30 13:19:26.928325 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 13:19:26.929744 systemd-logind[1576]: Session 13 logged out. Waiting for processes to exit. 
Oct 30 13:19:26.933123 systemd-logind[1576]: Removed session 13. Oct 30 13:19:26.936596 systemd[1]: Started sshd@13-10.0.0.37:22-10.0.0.1:40136.service - OpenSSH per-connection server daemon (10.0.0.1:40136). Oct 30 13:19:27.002449 sshd[5005]: Accepted publickey for core from 10.0.0.1 port 40136 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:27.004299 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:27.009369 systemd-logind[1576]: New session 14 of user core. Oct 30 13:19:27.017149 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 13:19:27.100066 sshd[5008]: Connection closed by 10.0.0.1 port 40136 Oct 30 13:19:27.100322 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:27.105834 systemd[1]: sshd@13-10.0.0.37:22-10.0.0.1:40136.service: Deactivated successfully. Oct 30 13:19:27.108030 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 13:19:27.108859 systemd-logind[1576]: Session 14 logged out. Waiting for processes to exit. Oct 30 13:19:27.110143 systemd-logind[1576]: Removed session 14. Oct 30 13:19:32.117508 systemd[1]: Started sshd@14-10.0.0.37:22-10.0.0.1:40142.service - OpenSSH per-connection server daemon (10.0.0.1:40142). Oct 30 13:19:32.173391 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 40142 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:32.175282 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:32.180099 systemd-logind[1576]: New session 15 of user core. Oct 30 13:19:32.192172 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 30 13:19:32.266830 sshd[5038]: Connection closed by 10.0.0.1 port 40142 Oct 30 13:19:32.267162 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:32.272108 systemd[1]: sshd@14-10.0.0.37:22-10.0.0.1:40142.service: Deactivated successfully. Oct 30 13:19:32.274309 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 13:19:32.275418 systemd-logind[1576]: Session 15 logged out. Waiting for processes to exit. Oct 30 13:19:32.276818 systemd-logind[1576]: Removed session 15. Oct 30 13:19:33.914185 kubelet[2769]: E1030 13:19:33.913975 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" podUID="c30303b7-2f8a-4e76-affb-92ba5d248c6b" Oct 30 13:19:35.914558 kubelet[2769]: E1030 13:19:35.914455 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:35.915611 kubelet[2769]: E1030 13:19:35.915537 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ws88l" 
podUID="ab05aa8b-f302-4973-9a9c-4a341cc5c31e" Oct 30 13:19:35.916136 kubelet[2769]: E1030 13:19:35.916103 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59b5978bfb-8lpn6" podUID="3b9db068-4663-4510-b3a1-4b6c4be0ac8f" Oct 30 13:19:36.914014 kubelet[2769]: E1030 13:19:36.913937 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:19:37.279779 systemd[1]: Started sshd@15-10.0.0.37:22-10.0.0.1:48484.service - OpenSSH per-connection server daemon (10.0.0.1:48484). Oct 30 13:19:37.327680 sshd[5054]: Accepted publickey for core from 10.0.0.1 port 48484 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:37.329063 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:37.333510 systemd-logind[1576]: New session 16 of user core. Oct 30 13:19:37.342120 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 13:19:37.437363 sshd[5057]: Connection closed by 10.0.0.1 port 48484 Oct 30 13:19:37.436525 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:37.441159 systemd[1]: sshd@15-10.0.0.37:22-10.0.0.1:48484.service: Deactivated successfully. Oct 30 13:19:37.443642 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 13:19:37.445037 systemd-logind[1576]: Session 16 logged out. Waiting for processes to exit. Oct 30 13:19:37.446790 systemd-logind[1576]: Removed session 16. 
Oct 30 13:19:37.452109 containerd[1588]: time="2025-10-30T13:19:37.452068976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3673948fe289201045ad98242d220bf2e3564a93bf7a67c02c1118e9a1e88758\" id:\"4483b555bfe8d211a3855cbd7f498ba8d9490995766f947ac98b245b42c76c6c\" pid:5078 exited_at:{seconds:1761830377 nanos:451792008}" Oct 30 13:19:37.454220 kubelet[2769]: E1030 13:19:37.454191 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:38.913364 kubelet[2769]: E1030 13:19:38.913292 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" podUID="2f3b781a-b409-44c1-bfbe-62b7c2fd7f95" Oct 30 13:19:38.913828 kubelet[2769]: E1030 13:19:38.913621 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" podUID="8185e627-e431-4f4e-9719-dc6a950cb7cf" Oct 30 13:19:42.453423 systemd[1]: Started sshd@16-10.0.0.37:22-10.0.0.1:48494.service - OpenSSH 
per-connection server daemon (10.0.0.1:48494). Oct 30 13:19:42.527731 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 48494 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:42.529878 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:42.534823 systemd-logind[1576]: New session 17 of user core. Oct 30 13:19:42.541245 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 13:19:42.641500 sshd[5101]: Connection closed by 10.0.0.1 port 48494 Oct 30 13:19:42.643727 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:42.647714 systemd[1]: sshd@16-10.0.0.37:22-10.0.0.1:48494.service: Deactivated successfully. Oct 30 13:19:42.649859 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 13:19:42.651665 systemd-logind[1576]: Session 17 logged out. Waiting for processes to exit. Oct 30 13:19:42.653751 systemd-logind[1576]: Removed session 17. Oct 30 13:19:46.915115 containerd[1588]: time="2025-10-30T13:19:46.914777749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:19:47.444418 containerd[1588]: time="2025-10-30T13:19:47.444341481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:47.445658 containerd[1588]: time="2025-10-30T13:19:47.445594874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:19:47.445833 containerd[1588]: time="2025-10-30T13:19:47.445641554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:19:47.445961 kubelet[2769]: E1030 13:19:47.445882 2769 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:47.446575 kubelet[2769]: E1030 13:19:47.446000 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:47.446575 kubelet[2769]: E1030 13:19:47.446192 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdf6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff94d9bcc-4c6ph_calico-apiserver(c30303b7-2f8a-4e76-affb-92ba5d248c6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:47.448206 kubelet[2769]: E1030 13:19:47.448139 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" podUID="c30303b7-2f8a-4e76-affb-92ba5d248c6b" Oct 30 13:19:47.657653 systemd[1]: Started sshd@17-10.0.0.37:22-10.0.0.1:38608.service - OpenSSH per-connection server daemon (10.0.0.1:38608). 
Oct 30 13:19:47.729123 sshd[5114]: Accepted publickey for core from 10.0.0.1 port 38608 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:47.730472 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:47.735532 systemd-logind[1576]: New session 18 of user core. Oct 30 13:19:47.739135 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 30 13:19:47.835781 sshd[5117]: Connection closed by 10.0.0.1 port 38608 Oct 30 13:19:47.836354 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:47.847381 systemd[1]: sshd@17-10.0.0.37:22-10.0.0.1:38608.service: Deactivated successfully. Oct 30 13:19:47.849546 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 13:19:47.850563 systemd-logind[1576]: Session 18 logged out. Waiting for processes to exit. Oct 30 13:19:47.853607 systemd[1]: Started sshd@18-10.0.0.37:22-10.0.0.1:38624.service - OpenSSH per-connection server daemon (10.0.0.1:38624). Oct 30 13:19:47.854313 systemd-logind[1576]: Removed session 18. Oct 30 13:19:47.921501 sshd[5131]: Accepted publickey for core from 10.0.0.1 port 38624 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:47.923541 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:47.928784 systemd-logind[1576]: New session 19 of user core. Oct 30 13:19:47.936179 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 13:19:48.239452 sshd[5134]: Connection closed by 10.0.0.1 port 38624 Oct 30 13:19:48.240528 sshd-session[5131]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:48.251895 systemd[1]: sshd@18-10.0.0.37:22-10.0.0.1:38624.service: Deactivated successfully. Oct 30 13:19:48.254249 systemd[1]: session-19.scope: Deactivated successfully. Oct 30 13:19:48.255149 systemd-logind[1576]: Session 19 logged out. Waiting for processes to exit. 
Oct 30 13:19:48.257963 systemd[1]: Started sshd@19-10.0.0.37:22-10.0.0.1:38640.service - OpenSSH per-connection server daemon (10.0.0.1:38640). Oct 30 13:19:48.259173 systemd-logind[1576]: Removed session 19. Oct 30 13:19:48.350020 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 38640 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:48.351620 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:48.356973 systemd-logind[1576]: New session 20 of user core. Oct 30 13:19:48.366273 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 30 13:19:48.915592 containerd[1588]: time="2025-10-30T13:19:48.915509260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 13:19:48.941311 sshd[5149]: Connection closed by 10.0.0.1 port 38640 Oct 30 13:19:48.943564 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:48.956308 systemd[1]: sshd@19-10.0.0.37:22-10.0.0.1:38640.service: Deactivated successfully. Oct 30 13:19:48.959564 systemd[1]: session-20.scope: Deactivated successfully. Oct 30 13:19:48.961461 systemd-logind[1576]: Session 20 logged out. Waiting for processes to exit. Oct 30 13:19:48.968181 systemd[1]: Started sshd@20-10.0.0.37:22-10.0.0.1:38642.service - OpenSSH per-connection server daemon (10.0.0.1:38642). Oct 30 13:19:48.971044 systemd-logind[1576]: Removed session 20. Oct 30 13:19:49.020694 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 38642 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:49.022802 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:49.028006 systemd-logind[1576]: New session 21 of user core. Oct 30 13:19:49.037155 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 30 13:19:49.241943 sshd[5179]: Connection closed by 10.0.0.1 port 38642 Oct 30 13:19:49.243693 sshd-session[5176]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:49.256547 systemd[1]: sshd@20-10.0.0.37:22-10.0.0.1:38642.service: Deactivated successfully. Oct 30 13:19:49.260462 containerd[1588]: time="2025-10-30T13:19:49.260410378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:49.261999 containerd[1588]: time="2025-10-30T13:19:49.261874935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 13:19:49.261999 containerd[1588]: time="2025-10-30T13:19:49.261924069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 13:19:49.262323 kubelet[2769]: E1030 13:19:49.262208 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:19:49.262323 kubelet[2769]: E1030 13:19:49.262286 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:19:49.262270 systemd[1]: session-21.scope: Deactivated successfully. 
Oct 30 13:19:49.264369 containerd[1588]: time="2025-10-30T13:19:49.264165511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 13:19:49.264391 systemd-logind[1576]: Session 21 logged out. Waiting for processes to exit. Oct 30 13:19:49.266457 kubelet[2769]: E1030 13:19:49.266288 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx8pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ws88l_calico-system(ab05aa8b-f302-4973-9a9c-4a341cc5c31e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:49.268349 kubelet[2769]: E1030 13:19:49.267920 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ws88l" podUID="ab05aa8b-f302-4973-9a9c-4a341cc5c31e" Oct 30 
13:19:49.273733 systemd[1]: Started sshd@21-10.0.0.37:22-10.0.0.1:38654.service - OpenSSH per-connection server daemon (10.0.0.1:38654). Oct 30 13:19:49.276501 systemd-logind[1576]: Removed session 21. Oct 30 13:19:49.349475 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 38654 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:49.351212 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:49.357093 systemd-logind[1576]: New session 22 of user core. Oct 30 13:19:49.366132 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 30 13:19:49.445835 sshd[5193]: Connection closed by 10.0.0.1 port 38654 Oct 30 13:19:49.446208 sshd-session[5190]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:49.451908 systemd[1]: sshd@21-10.0.0.37:22-10.0.0.1:38654.service: Deactivated successfully. Oct 30 13:19:49.454186 systemd[1]: session-22.scope: Deactivated successfully. Oct 30 13:19:49.455737 systemd-logind[1576]: Session 22 logged out. Waiting for processes to exit. Oct 30 13:19:49.458060 systemd-logind[1576]: Removed session 22. 
Oct 30 13:19:49.654508 containerd[1588]: time="2025-10-30T13:19:49.654440721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:49.655606 containerd[1588]: time="2025-10-30T13:19:49.655571310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 13:19:49.655722 containerd[1588]: time="2025-10-30T13:19:49.655648137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 13:19:49.655865 kubelet[2769]: E1030 13:19:49.655818 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:19:49.655912 kubelet[2769]: E1030 13:19:49.655869 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:19:49.656084 kubelet[2769]: E1030 13:19:49.656034 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpfc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jqhml_calico-system(f14ca4d9-aac0-4af6-a374-d183e93fb183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:49.658156 containerd[1588]: time="2025-10-30T13:19:49.658128654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 13:19:49.916160 kubelet[2769]: E1030 13:19:49.915570 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:19:50.025862 containerd[1588]: time="2025-10-30T13:19:50.025798043Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:50.027071 containerd[1588]: time="2025-10-30T13:19:50.026958861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 13:19:50.027071 containerd[1588]: time="2025-10-30T13:19:50.027023424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 13:19:50.027308 kubelet[2769]: E1030 13:19:50.027231 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:19:50.027401 kubelet[2769]: E1030 13:19:50.027314 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:19:50.027548 kubelet[2769]: E1030 13:19:50.027496 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpfc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jqhml_calico-system(f14ca4d9-aac0-4af6-a374-d183e93fb183): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:50.028725 kubelet[2769]: E1030 13:19:50.028663 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:19:50.918011 containerd[1588]: time="2025-10-30T13:19:50.917231916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 13:19:51.274005 containerd[1588]: time="2025-10-30T13:19:51.273770671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:51.275085 containerd[1588]: time="2025-10-30T13:19:51.275048793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 13:19:51.275163 containerd[1588]: time="2025-10-30T13:19:51.275141540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 13:19:51.275460 kubelet[2769]: E1030 13:19:51.275363 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:19:51.275460 kubelet[2769]: E1030 13:19:51.275456 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:19:51.277150 kubelet[2769]: E1030 13:19:51.275580 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8ced58a223049ba82448ee04d635af3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcptg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59b5978bfb-8lpn6_calico-system(3b9db068-4663-4510-b3a1-4b6c4be0ac8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:51.279376 containerd[1588]: time="2025-10-30T13:19:51.279344060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
13:19:51.658415 containerd[1588]: time="2025-10-30T13:19:51.658223951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:51.660003 containerd[1588]: time="2025-10-30T13:19:51.659874396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 13:19:51.660162 containerd[1588]: time="2025-10-30T13:19:51.660134863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 13:19:51.660309 kubelet[2769]: E1030 13:19:51.660227 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:19:51.660401 kubelet[2769]: E1030 13:19:51.660325 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:19:51.660507 kubelet[2769]: E1030 13:19:51.660457 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcptg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59b5978bfb-8lpn6_calico-system(3b9db068-4663-4510-b3a1-4b6c4be0ac8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:51.662115 kubelet[2769]: E1030 13:19:51.661955 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59b5978bfb-8lpn6" podUID="3b9db068-4663-4510-b3a1-4b6c4be0ac8f" Oct 30 13:19:52.914933 containerd[1588]: time="2025-10-30T13:19:52.914880516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:19:53.290392 containerd[1588]: time="2025-10-30T13:19:53.290330788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:53.291496 containerd[1588]: time="2025-10-30T13:19:53.291459807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:19:53.291563 containerd[1588]: time="2025-10-30T13:19:53.291526143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:19:53.291995 
kubelet[2769]: E1030 13:19:53.291706 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:53.291995 kubelet[2769]: E1030 13:19:53.291762 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:19:53.293218 containerd[1588]: time="2025-10-30T13:19:53.292465388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 13:19:53.293296 kubelet[2769]: E1030 13:19:53.292449 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7p4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ff94d9bcc-4z7tw_calico-apiserver(8185e627-e431-4f4e-9719-dc6a950cb7cf): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:53.294652 kubelet[2769]: E1030 13:19:53.294327 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" podUID="8185e627-e431-4f4e-9719-dc6a950cb7cf" Oct 30 13:19:53.620232 containerd[1588]: time="2025-10-30T13:19:53.620067739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:19:53.622218 containerd[1588]: time="2025-10-30T13:19:53.622172812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 13:19:53.622378 containerd[1588]: time="2025-10-30T13:19:53.622210214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 13:19:53.622427 kubelet[2769]: E1030 13:19:53.622378 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 
13:19:53.622473 kubelet[2769]: E1030 13:19:53.622432 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:19:53.622626 kubelet[2769]: E1030 13:19:53.622576 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmp74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6798f4bdc5-q6qdh_calico-system(2f3b781a-b409-44c1-bfbe-62b7c2fd7f95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 13:19:53.623844 kubelet[2769]: E1030 13:19:53.623790 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" podUID="2f3b781a-b409-44c1-bfbe-62b7c2fd7f95" Oct 30 13:19:54.467244 systemd[1]: Started sshd@22-10.0.0.37:22-10.0.0.1:38664.service - OpenSSH per-connection server daemon (10.0.0.1:38664). Oct 30 13:19:54.540682 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 38664 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:54.542390 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:54.547232 systemd-logind[1576]: New session 23 of user core. Oct 30 13:19:54.556132 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 30 13:19:54.642798 sshd[5214]: Connection closed by 10.0.0.1 port 38664 Oct 30 13:19:54.643127 sshd-session[5211]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:54.649648 systemd[1]: sshd@22-10.0.0.37:22-10.0.0.1:38664.service: Deactivated successfully. Oct 30 13:19:54.652909 systemd[1]: session-23.scope: Deactivated successfully. Oct 30 13:19:54.655709 systemd-logind[1576]: Session 23 logged out. Waiting for processes to exit. Oct 30 13:19:54.657108 systemd-logind[1576]: Removed session 23. Oct 30 13:19:59.662363 systemd[1]: Started sshd@23-10.0.0.37:22-10.0.0.1:34664.service - OpenSSH per-connection server daemon (10.0.0.1:34664). Oct 30 13:19:59.726016 sshd[5227]: Accepted publickey for core from 10.0.0.1 port 34664 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:19:59.726524 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:19:59.732341 systemd-logind[1576]: New session 24 of user core. Oct 30 13:19:59.742302 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 30 13:19:59.826701 sshd[5230]: Connection closed by 10.0.0.1 port 34664 Oct 30 13:19:59.829287 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Oct 30 13:19:59.834658 systemd[1]: sshd@23-10.0.0.37:22-10.0.0.1:34664.service: Deactivated successfully. Oct 30 13:19:59.837379 systemd[1]: session-24.scope: Deactivated successfully. Oct 30 13:19:59.838321 systemd-logind[1576]: Session 24 logged out. Waiting for processes to exit. Oct 30 13:19:59.840944 systemd-logind[1576]: Removed session 24. Oct 30 13:20:01.913696 kubelet[2769]: E1030 13:20:01.913637 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ws88l" podUID="ab05aa8b-f302-4973-9a9c-4a341cc5c31e" Oct 30 13:20:02.913614 kubelet[2769]: E1030 13:20:02.913551 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:20:02.914889 kubelet[2769]: E1030 13:20:02.914805 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4c6ph" 
podUID="c30303b7-2f8a-4e76-affb-92ba5d248c6b" Oct 30 13:20:02.916118 kubelet[2769]: E1030 13:20:02.915938 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59b5978bfb-8lpn6" podUID="3b9db068-4663-4510-b3a1-4b6c4be0ac8f" Oct 30 13:20:03.913944 kubelet[2769]: E1030 13:20:03.913817 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:20:03.914485 kubelet[2769]: E1030 13:20:03.914397 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ff94d9bcc-4z7tw" podUID="8185e627-e431-4f4e-9719-dc6a950cb7cf" Oct 30 13:20:03.915671 
kubelet[2769]: E1030 13:20:03.915605 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jqhml" podUID="f14ca4d9-aac0-4af6-a374-d183e93fb183" Oct 30 13:20:04.840403 systemd[1]: Started sshd@24-10.0.0.37:22-10.0.0.1:34676.service - OpenSSH per-connection server daemon (10.0.0.1:34676). Oct 30 13:20:04.902588 sshd[5246]: Accepted publickey for core from 10.0.0.1 port 34676 ssh2: RSA SHA256:c3t/zpy+7hheQnx8VQXkkdRUAhmSlZ5PvCdvAoB0wVo Oct 30 13:20:04.904064 sshd-session[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:20:04.908998 systemd-logind[1576]: New session 25 of user core. 
Oct 30 13:20:04.914032 kubelet[2769]: E1030 13:20:04.913796 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6798f4bdc5-q6qdh" podUID="2f3b781a-b409-44c1-bfbe-62b7c2fd7f95" Oct 30 13:20:04.920188 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 30 13:20:05.002345 sshd[5249]: Connection closed by 10.0.0.1 port 34676 Oct 30 13:20:05.002687 sshd-session[5246]: pam_unix(sshd:session): session closed for user core Oct 30 13:20:05.008065 systemd[1]: sshd@24-10.0.0.37:22-10.0.0.1:34676.service: Deactivated successfully. Oct 30 13:20:05.010409 systemd[1]: session-25.scope: Deactivated successfully. Oct 30 13:20:05.011293 systemd-logind[1576]: Session 25 logged out. Waiting for processes to exit. Oct 30 13:20:05.013009 systemd-logind[1576]: Removed session 25.