Oct 27 08:16:47.322704 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 27 06:24:35 -00 2025 Oct 27 08:16:47.322727 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:16:47.322738 kernel: BIOS-provided physical RAM map: Oct 27 08:16:47.322745 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 27 08:16:47.322752 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 27 08:16:47.322759 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 27 08:16:47.322767 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 27 08:16:47.322774 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 27 08:16:47.322784 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 27 08:16:47.322793 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 27 08:16:47.322800 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 27 08:16:47.322806 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 27 08:16:47.322813 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 27 08:16:47.322820 kernel: NX (Execute Disable) protection: active Oct 27 08:16:47.322836 kernel: APIC: Static calls initialized Oct 27 08:16:47.322844 kernel: SMBIOS 2.8 present. 
Oct 27 08:16:47.322855 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 27 08:16:47.322862 kernel: DMI: Memory slots populated: 1/1 Oct 27 08:16:47.322870 kernel: Hypervisor detected: KVM Oct 27 08:16:47.322877 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 27 08:16:47.322885 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 27 08:16:47.322892 kernel: kvm-clock: using sched offset of 3640308068 cycles Oct 27 08:16:47.322900 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 27 08:16:47.322908 kernel: tsc: Detected 2794.748 MHz processor Oct 27 08:16:47.322919 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 27 08:16:47.322927 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 27 08:16:47.322935 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 27 08:16:47.322943 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 27 08:16:47.322951 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 27 08:16:47.322958 kernel: Using GB pages for direct mapping Oct 27 08:16:47.322966 kernel: ACPI: Early table checksum verification disabled Oct 27 08:16:47.322976 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 27 08:16:47.322984 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:16:47.322992 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:16:47.323000 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:16:47.323008 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 27 08:16:47.323016 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:16:47.323024 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:16:47.323034 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:16:47.323042 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:16:47.323053 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 27 08:16:47.323061 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 27 08:16:47.323069 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 27 08:16:47.323079 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 27 08:16:47.323087 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 27 08:16:47.323095 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 27 08:16:47.323102 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 27 08:16:47.323110 kernel: No NUMA configuration found Oct 27 08:16:47.323118 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 27 08:16:47.323128 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Oct 27 08:16:47.323136 kernel: Zone ranges: Oct 27 08:16:47.323144 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 27 08:16:47.323152 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 27 08:16:47.323160 kernel: Normal empty Oct 27 08:16:47.323168 kernel: Device empty Oct 27 08:16:47.323176 kernel: Movable zone start for each node Oct 27 08:16:47.323544 kernel: Early memory node ranges Oct 27 08:16:47.323559 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Oct 27 08:16:47.323567 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 27 08:16:47.323575 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 27 08:16:47.323583 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 27 08:16:47.323592 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 27 08:16:47.323600 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 27 08:16:47.323612 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 27 08:16:47.323620 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 27 08:16:47.323631 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 27 08:16:47.323639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 27 08:16:47.323649 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 27 08:16:47.323658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 27 08:16:47.323666 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 27 08:16:47.323674 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 27 08:16:47.323682 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 27 08:16:47.323692 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 27 08:16:47.323700 kernel: TSC deadline timer available Oct 27 08:16:47.323708 kernel: CPU topo: Max. logical packages: 1 Oct 27 08:16:47.323717 kernel: CPU topo: Max. logical dies: 1 Oct 27 08:16:47.323724 kernel: CPU topo: Max. dies per package: 1 Oct 27 08:16:47.323732 kernel: CPU topo: Max. threads per core: 1 Oct 27 08:16:47.323740 kernel: CPU topo: Num. cores per package: 4 Oct 27 08:16:47.323750 kernel: CPU topo: Num. threads per package: 4 Oct 27 08:16:47.323758 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 27 08:16:47.323766 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 27 08:16:47.323774 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 27 08:16:47.323782 kernel: kvm-guest: setup PV sched yield Oct 27 08:16:47.323790 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 27 08:16:47.323798 kernel: Booting paravirtualized kernel on KVM Oct 27 08:16:47.323807 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 27 08:16:47.323817 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 27 08:16:47.323825 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 27 08:16:47.323833 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 27 08:16:47.323841 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 27 08:16:47.323849 kernel: kvm-guest: PV spinlocks enabled Oct 27 08:16:47.323857 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 27 08:16:47.323866 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:16:47.323877 kernel: random: crng init done Oct 27 08:16:47.323885 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 27 08:16:47.323894 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 27 
08:16:47.323902 kernel: Fallback order for Node 0: 0 Oct 27 08:16:47.323910 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Oct 27 08:16:47.323918 kernel: Policy zone: DMA32 Oct 27 08:16:47.323926 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 27 08:16:47.323936 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 27 08:16:47.323944 kernel: ftrace: allocating 40092 entries in 157 pages Oct 27 08:16:47.323952 kernel: ftrace: allocated 157 pages with 5 groups Oct 27 08:16:47.323960 kernel: Dynamic Preempt: voluntary Oct 27 08:16:47.323968 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 27 08:16:47.323977 kernel: rcu: RCU event tracing is enabled. Oct 27 08:16:47.323985 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 27 08:16:47.323995 kernel: Trampoline variant of Tasks RCU enabled. Oct 27 08:16:47.324006 kernel: Rude variant of Tasks RCU enabled. Oct 27 08:16:47.324014 kernel: Tracing variant of Tasks RCU enabled. Oct 27 08:16:47.324022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 27 08:16:47.324030 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 27 08:16:47.324038 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 08:16:47.324046 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 08:16:47.324056 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 08:16:47.324065 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 27 08:16:47.324073 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 27 08:16:47.324088 kernel: Console: colour VGA+ 80x25 Oct 27 08:16:47.324098 kernel: printk: legacy console [ttyS0] enabled Oct 27 08:16:47.324107 kernel: ACPI: Core revision 20240827 Oct 27 08:16:47.324115 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 27 08:16:47.324123 kernel: APIC: Switch to symmetric I/O mode setup Oct 27 08:16:47.324132 kernel: x2apic enabled Oct 27 08:16:47.324140 kernel: APIC: Switched APIC routing to: physical x2apic Oct 27 08:16:47.324153 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 27 08:16:47.324161 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 27 08:16:47.324170 kernel: kvm-guest: setup PV IPIs Oct 27 08:16:47.324178 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 27 08:16:47.324207 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 27 08:16:47.324216 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 27 08:16:47.324224 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 27 08:16:47.324233 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 27 08:16:47.324241 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 27 08:16:47.324249 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 27 08:16:47.324268 kernel: Spectre V2 : Mitigation: Retpolines Oct 27 08:16:47.324276 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 27 08:16:47.324285 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 27 08:16:47.324293 kernel: active return thunk: retbleed_return_thunk Oct 27 08:16:47.324301 kernel: RETBleed: Mitigation: untrained return thunk Oct 27 08:16:47.324310 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 27 08:16:47.324318 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 27 08:16:47.324329 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 27 08:16:47.324338 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 27 08:16:47.324347 kernel: active return thunk: srso_return_thunk Oct 27 08:16:47.324355 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 27 08:16:47.324363 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 27 08:16:47.324372 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 27 08:16:47.324380 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 27 08:16:47.324390 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 27 08:16:47.324399 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 27 08:16:47.324407 kernel: Freeing SMP alternatives memory: 32K Oct 27 08:16:47.324415 kernel: pid_max: default: 32768 minimum: 301 Oct 27 08:16:47.324424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 27 08:16:47.324432 kernel: landlock: Up and running. Oct 27 08:16:47.324440 kernel: SELinux: Initializing. Oct 27 08:16:47.324453 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 08:16:47.324461 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 08:16:47.324470 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 27 08:16:47.324478 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 27 08:16:47.324487 kernel: ... version: 0 Oct 27 08:16:47.324495 kernel: ... bit width: 48 Oct 27 08:16:47.324503 kernel: ... generic registers: 6 Oct 27 08:16:47.324514 kernel: ... value mask: 0000ffffffffffff Oct 27 08:16:47.324522 kernel: ... max period: 00007fffffffffff Oct 27 08:16:47.324530 kernel: ... fixed-purpose events: 0 Oct 27 08:16:47.324538 kernel: ... event mask: 000000000000003f Oct 27 08:16:47.324546 kernel: signal: max sigframe size: 1776 Oct 27 08:16:47.324554 kernel: rcu: Hierarchical SRCU implementation. Oct 27 08:16:47.324563 kernel: rcu: Max phase no-delay instances is 400. Oct 27 08:16:47.324572 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 27 08:16:47.324583 kernel: smp: Bringing up secondary CPUs ... 
Oct 27 08:16:47.324591 kernel: smpboot: x86: Booting SMP configuration: Oct 27 08:16:47.324599 kernel: .... node #0, CPUs: #1 #2 #3 Oct 27 08:16:47.324607 kernel: smp: Brought up 1 node, 4 CPUs Oct 27 08:16:47.324616 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 27 08:16:47.324624 kernel: Memory: 2451440K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114376K reserved, 0K cma-reserved) Oct 27 08:16:47.324633 kernel: devtmpfs: initialized Oct 27 08:16:47.324643 kernel: x86/mm: Memory block size: 128MB Oct 27 08:16:47.324652 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 27 08:16:47.324660 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 27 08:16:47.324668 kernel: pinctrl core: initialized pinctrl subsystem Oct 27 08:16:47.324676 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 27 08:16:47.324685 kernel: audit: initializing netlink subsys (disabled) Oct 27 08:16:47.324693 kernel: audit: type=2000 audit(1761553004.775:1): state=initialized audit_enabled=0 res=1 Oct 27 08:16:47.324704 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 27 08:16:47.324712 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 27 08:16:47.324720 kernel: cpuidle: using governor menu Oct 27 08:16:47.324728 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 27 08:16:47.324737 kernel: dca service started, version 1.12.1 Oct 27 08:16:47.324745 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 27 08:16:47.324753 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 27 08:16:47.324764 kernel: PCI: Using configuration type 1 for base access Oct 27 08:16:47.324772 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 27 08:16:47.324781 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 27 08:16:47.324789 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 27 08:16:47.324798 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 27 08:16:47.324806 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 27 08:16:47.324814 kernel: ACPI: Added _OSI(Module Device) Oct 27 08:16:47.324825 kernel: ACPI: Added _OSI(Processor Device) Oct 27 08:16:47.324833 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 27 08:16:47.324841 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 27 08:16:47.324849 kernel: ACPI: Interpreter enabled Oct 27 08:16:47.324858 kernel: ACPI: PM: (supports S0 S3 S5) Oct 27 08:16:47.324866 kernel: ACPI: Using IOAPIC for interrupt routing Oct 27 08:16:47.324874 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 27 08:16:47.324885 kernel: PCI: Using E820 reservations for host bridge windows Oct 27 08:16:47.324893 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 27 08:16:47.324901 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 27 08:16:47.325156 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 27 08:16:47.325368 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 27 08:16:47.325547 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 27 08:16:47.325562 kernel: PCI host bridge to bus 0000:00 Oct 27 08:16:47.325740 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 27 08:16:47.325901 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 27 08:16:47.326061 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 27 08:16:47.326266 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 27 08:16:47.326431 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 27 08:16:47.326595 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 27 08:16:47.326752 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 27 08:16:47.326946 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 27 08:16:47.327130 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 27 08:16:47.327330 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Oct 27 08:16:47.327539 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Oct 27 08:16:47.327710 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Oct 27 08:16:47.327880 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 27 08:16:47.328063 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 27 08:16:47.328264 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Oct 27 08:16:47.328443 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Oct 27 08:16:47.328621 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Oct 27 08:16:47.328803 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 27 08:16:47.328976 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Oct 27 08:16:47.329150 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Oct 27 08:16:47.329350 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Oct 27 08:16:47.329535 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 27 08:16:47.329713 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Oct 27 08:16:47.329883 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Oct 27 08:16:47.330053 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 27 08:16:47.330244 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Oct 27 08:16:47.330436 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 27 08:16:47.330613 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 27 08:16:47.330793 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 27 08:16:47.330968 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Oct 27 08:16:47.331142 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Oct 27 08:16:47.331352 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 27 08:16:47.331525 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 27 08:16:47.331541 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 27 08:16:47.331550 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 27 08:16:47.331559 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 27 08:16:47.331567 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 27 08:16:47.331575 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 27 08:16:47.331584 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 27 08:16:47.331594 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 27 08:16:47.331603 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 27 08:16:47.331611 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 27 08:16:47.331620 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 27 08:16:47.331628 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 27 08:16:47.331637 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 27 08:16:47.331645 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 27 08:16:47.331654 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 27 08:16:47.331664 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 27 08:16:47.331672 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 27 08:16:47.331681 kernel: iommu: Default domain type: Translated Oct 27 08:16:47.331690 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 27 08:16:47.331698 kernel: PCI: Using ACPI for IRQ routing Oct 27 08:16:47.331706 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 27 08:16:47.331714 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 27 08:16:47.331725 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 27 08:16:47.331896 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 27 08:16:47.332067 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 27 08:16:47.332355 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 27 08:16:47.332392 kernel: vgaarb: loaded Oct 27 08:16:47.332403 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 27 08:16:47.332420 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 27 08:16:47.332429 kernel: clocksource: Switched to clocksource kvm-clock Oct 27 08:16:47.332438 kernel: VFS: Disk quotas dquot_6.6.0 Oct 27 
08:16:47.332448 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 27 08:16:47.332456 kernel: pnp: PnP ACPI init Oct 27 08:16:47.332664 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 27 08:16:47.332679 kernel: pnp: PnP ACPI: found 6 devices Oct 27 08:16:47.332692 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 27 08:16:47.332701 kernel: NET: Registered PF_INET protocol family Oct 27 08:16:47.332710 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 27 08:16:47.332719 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 27 08:16:47.332728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 27 08:16:47.332737 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 27 08:16:47.332746 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 27 08:16:47.332758 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 27 08:16:47.332767 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 08:16:47.332776 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 08:16:47.332785 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 27 08:16:47.332794 kernel: NET: Registered PF_XDP protocol family Oct 27 08:16:47.332969 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 27 08:16:47.333131 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 27 08:16:47.335237 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 27 08:16:47.335417 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 27 08:16:47.335581 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 27 08:16:47.335742 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 27 08:16:47.335754 kernel: PCI: CLS 0 bytes, default 64 Oct 27 08:16:47.335763 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 27 08:16:47.335773 kernel: Initialise system trusted keyrings Oct 27 08:16:47.335787 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 27 08:16:47.335796 kernel: Key type asymmetric registered Oct 27 08:16:47.335805 kernel: Asymmetric key parser 'x509' registered Oct 27 08:16:47.335814 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 27 08:16:47.335823 kernel: io scheduler mq-deadline registered Oct 27 08:16:47.335832 kernel: io scheduler kyber registered Oct 27 08:16:47.335841 kernel: io scheduler bfq registered Oct 27 08:16:47.335852 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 27 08:16:47.335862 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 27 08:16:47.335871 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 27 08:16:47.335880 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 27 08:16:47.335889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 27 08:16:47.335898 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 27 08:16:47.335907 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 27 08:16:47.335919 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 27 08:16:47.335928 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 27 08:16:47.335937 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Oct 27 08:16:47.336123 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 27 08:16:47.336327 kernel: rtc_cmos 00:04: registered as rtc0 Oct 27 08:16:47.336495 kernel: rtc_cmos 00:04: setting system clock to 2025-10-27T08:16:45 UTC (1761553005) Oct 27 08:16:47.336665 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 27 08:16:47.336677 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 27 08:16:47.336687 kernel: NET: Registered PF_INET6 protocol family Oct 27 08:16:47.336696 kernel: Segment Routing with IPv6 Oct 27 08:16:47.336704 kernel: In-situ OAM (IOAM) with IPv6 Oct 27 08:16:47.336713 kernel: NET: Registered PF_PACKET protocol family Oct 27 08:16:47.336722 kernel: Key type dns_resolver registered Oct 27 08:16:47.336733 kernel: IPI shorthand broadcast: enabled Oct 27 08:16:47.336742 kernel: sched_clock: Marking stable (1209002680, 199352670)->(1458306776, -49951426) Oct 27 08:16:47.336751 kernel: registered taskstats version 1 Oct 27 08:16:47.336760 kernel: Loading compiled-in X.509 certificates Oct 27 08:16:47.336769 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 6c7ef547b8d769f7afd2708799fb9c3145695bfb' Oct 27 08:16:47.336777 kernel: Demotion targets for Node 0: null Oct 27 08:16:47.336786 kernel: Key type .fscrypt registered Oct 27 08:16:47.336797 kernel: Key type fscrypt-provisioning registered Oct 27 08:16:47.336806 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 27 08:16:47.336815 kernel: ima: Allocated hash algorithm: sha1 Oct 27 08:16:47.336824 kernel: ima: No architecture policies found Oct 27 08:16:47.336832 kernel: clk: Disabling unused clocks Oct 27 08:16:47.336841 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 27 08:16:47.336850 kernel: Write protecting the kernel read-only data: 40960k Oct 27 08:16:47.336859 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 27 08:16:47.336870 kernel: Run /init as init process Oct 27 08:16:47.336879 kernel: with arguments: Oct 27 08:16:47.336889 kernel: /init Oct 27 08:16:47.336897 kernel: with environment: Oct 27 08:16:47.336906 kernel: HOME=/ Oct 27 08:16:47.336915 kernel: TERM=linux Oct 27 08:16:47.336924 kernel: SCSI subsystem initialized Oct 27 08:16:47.336935 kernel: libata version 3.00 loaded. 
Oct 27 08:16:47.337117 kernel: ahci 0000:00:1f.2: version 3.0 Oct 27 08:16:47.337149 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 27 08:16:47.337350 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 27 08:16:47.337527 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 27 08:16:47.337847 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 27 08:16:47.338305 kernel: scsi host0: ahci Oct 27 08:16:47.338621 kernel: scsi host1: ahci Oct 27 08:16:47.338872 kernel: scsi host2: ahci Oct 27 08:16:47.339067 kernel: scsi host3: ahci Oct 27 08:16:47.339287 kernel: scsi host4: ahci Oct 27 08:16:47.339480 kernel: scsi host5: ahci Oct 27 08:16:47.339494 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Oct 27 08:16:47.339503 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Oct 27 08:16:47.339512 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Oct 27 08:16:47.339521 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Oct 27 08:16:47.339530 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Oct 27 08:16:47.339542 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Oct 27 08:16:47.339551 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 27 08:16:47.339560 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 27 08:16:47.339569 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 27 08:16:47.339579 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 27 08:16:47.339588 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 27 08:16:47.339597 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 27 08:16:47.339608 kernel: ata3.00: LPM support broken, forcing max_power Oct 27 08:16:47.339616 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 27 08:16:47.339626 kernel: ata3.00: applying bridge limits Oct 27 08:16:47.339635 kernel: ata3.00: LPM support broken, forcing max_power Oct 27 08:16:47.339643 kernel: ata3.00: configured for UDMA/100 Oct 27 08:16:47.339848 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 27 08:16:47.340044 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 27 08:16:47.340238 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 27 08:16:47.340252 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 27 08:16:47.340269 kernel: GPT:16515071 != 27000831 Oct 27 08:16:47.340278 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 27 08:16:47.340287 kernel: GPT:16515071 != 27000831 Oct 27 08:16:47.340296 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 27 08:16:47.340308 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 08:16:47.340318 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:16:47.340563 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 27 08:16:47.340576 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 27 08:16:47.340768 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 27 08:16:47.340780 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 27 08:16:47.340790 kernel: device-mapper: uevent: version 1.0.3 Oct 27 08:16:47.340803 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 27 08:16:47.340815 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 27 08:16:47.340827 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:16:47.340835 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:16:47.340844 kernel: raid6: avx2x4 gen() 30401 MB/s Oct 27 08:16:47.340855 kernel: raid6: avx2x2 gen() 30733 MB/s Oct 27 08:16:47.340864 kernel: raid6: avx2x1 gen() 25652 MB/s Oct 27 08:16:47.340873 kernel: raid6: using algorithm avx2x2 gen() 30733 MB/s Oct 27 08:16:47.340882 kernel: raid6: .... xor() 19839 MB/s, rmw enabled Oct 27 08:16:47.340891 kernel: raid6: using avx2x2 recovery algorithm Oct 27 08:16:47.340900 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:16:47.340909 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:16:47.340919 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:16:47.340928 kernel: xor: automatically using best checksumming function avx Oct 27 08:16:47.340937 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:16:47.340945 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 27 08:16:47.340957 kernel: BTRFS: device fsid bf514789-bcec-4c15-ac9d-e4c3d19a42b2 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (176) Oct 27 08:16:47.340967 kernel: BTRFS info (device dm-0): first mount of filesystem bf514789-bcec-4c15-ac9d-e4c3d19a42b2 Oct 27 08:16:47.340978 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:16:47.340989 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 08:16:47.340998 kernel: BTRFS info (device dm-0): enabling free space tree Oct 27 08:16:47.341007 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:16:47.341016 kernel: loop: module loaded Oct 27 08:16:47.341025 kernel: loop0: detected capacity change from 0 to 100120 Oct 27 08:16:47.341034 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 27 08:16:47.341044 systemd[1]: Successfully made /usr/ read-only. Oct 27 08:16:47.341060 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:16:47.341071 systemd[1]: Detected virtualization kvm. Oct 27 08:16:47.341080 systemd[1]: Detected architecture x86-64. Oct 27 08:16:47.341089 systemd[1]: Running in initrd. Oct 27 08:16:47.341098 systemd[1]: No hostname configured, using default hostname. Oct 27 08:16:47.341108 systemd[1]: Hostname set to . Oct 27 08:16:47.341119 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 08:16:47.341129 systemd[1]: Queued start job for default target initrd.target. Oct 27 08:16:47.341138 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 08:16:47.341160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:16:47.341194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:16:47.341205 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 27 08:16:47.341214 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 08:16:47.341229 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 27 08:16:47.341239 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 27 08:16:47.341248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:16:47.341266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:16:47.341275 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 27 08:16:47.341287 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:16:47.341297 systemd[1]: Reached target slices.target - Slice Units. Oct 27 08:16:47.341307 systemd[1]: Reached target swap.target - Swaps. Oct 27 08:16:47.341316 systemd[1]: Reached target timers.target - Timer Units. Oct 27 08:16:47.341326 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 08:16:47.341335 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 08:16:47.341344 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 27 08:16:47.341357 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 27 08:16:47.341366 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:16:47.341376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 08:16:47.341386 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:16:47.341395 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:16:47.341405 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 27 08:16:47.341415 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 27 08:16:47.341426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 08:16:47.341436 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 27 08:16:47.341446 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 27 08:16:47.341456 systemd[1]: Starting systemd-fsck-usr.service... Oct 27 08:16:47.341465 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 08:16:47.341475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 08:16:47.341484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:16:47.341496 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 27 08:16:47.341506 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:16:47.341515 systemd[1]: Finished systemd-fsck-usr.service. Oct 27 08:16:47.341529 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 08:16:47.341569 systemd-journald[310]: Collecting audit messages is disabled. Oct 27 08:16:47.341595 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 27 08:16:47.341608 systemd-journald[310]: Journal started Oct 27 08:16:47.341629 systemd-journald[310]: Runtime Journal (/run/log/journal/7ba28b7eed504ba6976f848c39fb835c) is 6M, max 48.3M, 42.2M free. Oct 27 08:16:47.344212 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 08:16:47.346466 systemd-modules-load[313]: Inserted module 'br_netfilter' Oct 27 08:16:47.348425 kernel: Bridge firewalling registered Oct 27 08:16:47.350502 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 08:16:47.352047 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 08:16:47.358054 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:16:47.361139 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 08:16:47.435894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:16:47.441940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:16:47.445178 systemd-tmpfiles[327]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 27 08:16:47.447123 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 08:16:47.453793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:16:47.457816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:16:47.471895 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:16:47.481369 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:16:47.485423 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 08:16:47.491237 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 27 08:16:47.527768 dracut-cmdline[357]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:16:47.539595 systemd-resolved[344]: Positive Trust Anchors: Oct 27 08:16:47.539609 systemd-resolved[344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:16:47.539614 systemd-resolved[344]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:16:47.539644 systemd-resolved[344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:16:47.573960 systemd-resolved[344]: Defaulting to hostname 'linux'. Oct 27 08:16:47.575411 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Oct 27 08:16:47.577704 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:16:47.672241 kernel: Loading iSCSI transport class v2.0-870. Oct 27 08:16:47.686228 kernel: iscsi: registered transport (tcp) Oct 27 08:16:47.713217 kernel: iscsi: registered transport (qla4xxx) Oct 27 08:16:47.713263 kernel: QLogic iSCSI HBA Driver Oct 27 08:16:47.745635 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 08:16:47.779775 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:16:47.783151 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 08:16:47.840209 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 27 08:16:47.845463 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 27 08:16:47.847778 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 27 08:16:47.893601 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 27 08:16:47.897270 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:16:47.928500 systemd-udevd[596]: Using default interface naming scheme 'v257'. Oct 27 08:16:47.942385 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:16:47.948420 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 27 08:16:47.978510 dracut-pre-trigger[661]: rd.md=0: removing MD RAID activation Oct 27 08:16:47.988333 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 08:16:47.992226 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:16:48.019590 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 08:16:48.021080 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 08:16:48.047556 systemd-networkd[710]: lo: Link UP Oct 27 08:16:48.047565 systemd-networkd[710]: lo: Gained carrier Oct 27 08:16:48.048148 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:16:48.050689 systemd[1]: Reached target network.target - Network. Oct 27 08:16:48.121170 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:16:48.160870 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 27 08:16:48.195806 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 27 08:16:48.225224 kernel: cryptd: max_cpu_qlen set to 1000 Oct 27 08:16:48.226647 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 27 08:16:48.232087 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 27 08:16:48.250988 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:16:48.261960 kernel: AES CTR mode by8 optimization enabled Oct 27 08:16:48.250993 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 27 08:16:48.277496 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 27 08:16:48.254036 systemd-networkd[710]: eth0: Link UP Oct 27 08:16:48.254264 systemd-networkd[710]: eth0: Gained carrier Oct 27 08:16:48.254273 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:16:48.263772 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 27 08:16:48.284386 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 08:16:48.288743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 08:16:48.298134 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 08:16:48.300563 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:16:48.307282 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 08:16:48.310435 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 27 08:16:48.314627 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 27 08:16:48.314960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:16:48.315086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:16:48.318903 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:16:48.332482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:16:48.350240 disk-uuid[835]: Primary Header is updated. Oct 27 08:16:48.350240 disk-uuid[835]: Secondary Entries is updated. Oct 27 08:16:48.350240 disk-uuid[835]: Secondary Header is updated. Oct 27 08:16:48.349572 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 27 08:16:48.442122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:16:48.637231 systemd-resolved[344]: Detected conflict on linux IN A 10.0.0.23 Oct 27 08:16:48.637245 systemd-resolved[344]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Oct 27 08:16:49.479657 disk-uuid[843]: Warning: The kernel is still using the old partition table. Oct 27 08:16:49.479657 disk-uuid[843]: The new table will be used at the next reboot or after you Oct 27 08:16:49.479657 disk-uuid[843]: run partprobe(8) or kpartx(8) Oct 27 08:16:49.479657 disk-uuid[843]: The operation has completed successfully. Oct 27 08:16:49.496220 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 27 08:16:49.496582 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 27 08:16:49.499767 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 27 08:16:49.547531 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (856) Oct 27 08:16:49.547571 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:16:49.547591 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:16:49.552836 kernel: BTRFS info (device vda6): turning on async discard Oct 27 08:16:49.552856 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 08:16:49.560214 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:16:49.561295 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 27 08:16:49.565717 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 27 08:16:49.624392 systemd-networkd[710]: eth0: Gained IPv6LL Oct 27 08:16:49.725476 ignition[875]: Ignition 2.22.0 Oct 27 08:16:49.725491 ignition[875]: Stage: fetch-offline Oct 27 08:16:49.725543 ignition[875]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:16:49.725558 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:16:49.725667 ignition[875]: parsed url from cmdline: "" Oct 27 08:16:49.725671 ignition[875]: no config URL provided Oct 27 08:16:49.725678 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 08:16:49.725690 ignition[875]: no config at "/usr/lib/ignition/user.ign" Oct 27 08:16:49.725738 ignition[875]: op(1): [started] loading QEMU firmware config module Oct 27 08:16:49.725744 ignition[875]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 27 08:16:49.745733 ignition[875]: op(1): [finished] loading QEMU firmware config module Oct 27 08:16:49.823895 ignition[875]: parsing config with SHA512: c2b3183d520a820db39d746e9c739937345f69f08bd0fa20f08f3dba49b67f1ebf8887501b27b969b7de8dcd7a1f1162cf799256ee236572c75fcd47974e5e96 Oct 27 08:16:49.829979 unknown[875]: fetched base config from "system" Oct 27 08:16:49.829992 unknown[875]: fetched user config from "qemu" Oct 27 08:16:49.830436 ignition[875]: fetch-offline: fetch-offline passed Oct 27 08:16:49.830504 ignition[875]: Ignition finished successfully Oct 27 08:16:49.836153 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 08:16:49.840613 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 27 08:16:49.844600 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 27 08:16:49.909865 ignition[886]: Ignition 2.22.0 Oct 27 08:16:49.909881 ignition[886]: Stage: kargs Oct 27 08:16:49.910179 ignition[886]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:16:49.910215 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:16:49.910946 ignition[886]: kargs: kargs passed Oct 27 08:16:49.910995 ignition[886]: Ignition finished successfully Oct 27 08:16:49.963803 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 27 08:16:49.966585 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 27 08:16:50.017432 ignition[893]: Ignition 2.22.0 Oct 27 08:16:50.017446 ignition[893]: Stage: disks Oct 27 08:16:50.017584 ignition[893]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:16:50.017595 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:16:50.018282 ignition[893]: disks: disks passed Oct 27 08:16:50.018333 ignition[893]: Ignition finished successfully Oct 27 08:16:50.026358 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 27 08:16:50.029828 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 27 08:16:50.029932 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 27 08:16:50.033375 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 08:16:50.037236 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:16:50.040313 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:16:50.044605 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 27 08:16:50.092475 systemd-fsck[903]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 27 08:16:50.101080 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 27 08:16:50.102373 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 27 08:16:50.232207 kernel: EXT4-fs (vda9): mounted filesystem e90e2fe3-e1db-4bff-abac-c8d1d032f674 r/w with ordered data mode. Quota mode: none. Oct 27 08:16:50.232764 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 27 08:16:50.233549 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 27 08:16:50.236856 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 08:16:50.241046 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 27 08:16:50.243457 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 27 08:16:50.243492 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 27 08:16:50.243518 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 08:16:50.262646 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 27 08:16:50.270727 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (912) Oct 27 08:16:50.270758 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:16:50.270771 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:16:50.266039 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 27 08:16:50.275437 kernel: BTRFS info (device vda6): turning on async discard Oct 27 08:16:50.275453 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 08:16:50.279050 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 27 08:16:50.335399 initrd-setup-root[936]: cut: /sysroot/etc/passwd: No such file or directory Oct 27 08:16:50.343158 initrd-setup-root[943]: cut: /sysroot/etc/group: No such file or directory Oct 27 08:16:50.348704 initrd-setup-root[950]: cut: /sysroot/etc/shadow: No such file or directory Oct 27 08:16:50.354236 initrd-setup-root[957]: cut: /sysroot/etc/gshadow: No such file or directory Oct 27 08:16:50.472160 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Oct 27 08:16:50.475631 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 27 08:16:50.479038 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 27 08:16:50.505224 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:16:50.522366 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 27 08:16:50.535567 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 27 08:16:50.548356 ignition[1026]: INFO : Ignition 2.22.0 Oct 27 08:16:50.548356 ignition[1026]: INFO : Stage: mount Oct 27 08:16:50.551070 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:16:50.551070 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:16:50.551070 ignition[1026]: INFO : mount: mount passed Oct 27 08:16:50.551070 ignition[1026]: INFO : Ignition finished successfully Oct 27 08:16:50.559765 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 27 08:16:50.561124 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 27 08:16:50.589816 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 08:16:50.621631 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1038) Oct 27 08:16:50.621669 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:16:50.621682 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:16:50.626946 kernel: BTRFS info (device vda6): turning on async discard Oct 27 08:16:50.626966 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 08:16:50.628814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 27 08:16:50.673002 ignition[1055]: INFO : Ignition 2.22.0 Oct 27 08:16:50.673002 ignition[1055]: INFO : Stage: files Oct 27 08:16:50.675652 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:16:50.675652 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:16:50.679617 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Oct 27 08:16:50.682098 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 27 08:16:50.682098 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 27 08:16:50.689779 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 27 08:16:50.692088 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 27 08:16:50.694311 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 27 08:16:50.694311 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 27 08:16:50.694311 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 27 08:16:50.692628 unknown[1055]: wrote ssh authorized keys file for user: core Oct 27 08:16:50.765263 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 27 08:16:50.825815 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 27 08:16:50.829238 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] 
writing file "/sysroot/home/core/install.sh" Oct 27 08:16:50.829238 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 27 08:16:50.829238 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 27 08:16:50.829238 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 27 08:16:50.829238 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 08:16:50.829238 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 08:16:50.829238 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 08:16:50.829238 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 08:16:50.852837 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 08:16:50.852837 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 08:16:50.852837 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 08:16:50.852837 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 08:16:50.852837 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 08:16:50.852837 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 27 08:16:51.390858 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 27 08:16:54.259036 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 08:16:54.259036 ignition[1055]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 27 08:16:54.265482 ignition[1055]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 08:16:54.265482 ignition[1055]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 08:16:54.265482 ignition[1055]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 27 08:16:54.265482 ignition[1055]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 27 08:16:54.265482 ignition[1055]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 08:16:54.265482 ignition[1055]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 08:16:54.265482 ignition[1055]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 27 08:16:54.265482 ignition[1055]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 27 08:16:54.295557 ignition[1055]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 08:16:54.302632 ignition[1055]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 08:16:54.305239 ignition[1055]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 27 08:16:54.305239 ignition[1055]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 27 08:16:54.305239 ignition[1055]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 27 08:16:54.305239 ignition[1055]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 27 08:16:54.305239 ignition[1055]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 27 08:16:54.305239 ignition[1055]: INFO : files: files passed Oct 27 08:16:54.305239 ignition[1055]: INFO : Ignition finished successfully Oct 27 08:16:54.309585 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 27 08:16:54.317019 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 27 08:16:54.322885 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 27 08:16:54.340317 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 27 08:16:54.340454 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 27 08:16:54.343922 initrd-setup-root-after-ignition[1086]: grep: /sysroot/oem/oem-release: No such file or directory Oct 27 08:16:54.348058 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:16:54.348058 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:16:54.353174 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:16:54.357497 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 08:16:54.359741 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 27 08:16:54.365249 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 27 08:16:54.430239 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 27 08:16:54.430379 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 27 08:16:54.432278 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 27 08:16:54.435931 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 27 08:16:54.442575 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 27 08:16:54.445278 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 27 08:16:54.482327 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Oct 27 08:16:54.483897 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 27 08:16:54.504776 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 08:16:54.504921 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:16:54.508755 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:16:54.512585 systemd[1]: Stopped target timers.target - Timer Units. Oct 27 08:16:54.516139 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 27 08:16:54.516285 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 08:16:54.521238 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 27 08:16:54.524832 systemd[1]: Stopped target basic.target - Basic System. Oct 27 08:16:54.525258 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 27 08:16:54.525818 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 08:16:54.532967 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 27 08:16:54.533721 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 27 08:16:54.534674 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 27 08:16:54.535272 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 08:16:54.536155 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 27 08:16:54.550264 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 27 08:16:54.550440 systemd[1]: Stopped target swap.target - Swaps. Oct 27 08:16:54.551025 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 27 08:16:54.551150 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 27 08:16:54.552828 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:16:54.553721 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:16:54.566482 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 27 08:16:54.568271 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:16:54.570895 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 27 08:16:54.571090 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 27 08:16:54.576390 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 27 08:16:54.576529 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 08:16:54.578128 systemd[1]: Stopped target paths.target - Path Units. Oct 27 08:16:54.583376 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 27 08:16:54.587293 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:16:54.591143 systemd[1]: Stopped target slices.target - Slice Units. Oct 27 08:16:54.594491 systemd[1]: Stopped target sockets.target - Socket Units. Oct 27 08:16:54.596469 systemd[1]: iscsid.socket: Deactivated successfully. Oct 27 08:16:54.596605 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 08:16:54.599506 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 27 08:16:54.599616 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Oct 27 08:16:54.603008 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 27 08:16:54.603180 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 08:16:54.607578 systemd[1]: ignition-files.service: Deactivated successfully. Oct 27 08:16:54.607743 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 27 08:16:54.612234 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 27 08:16:54.616166 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 27 08:16:54.617967 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 27 08:16:54.618222 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:16:54.620501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 27 08:16:54.620645 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:16:54.623946 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 27 08:16:54.624093 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 08:16:54.636483 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 27 08:16:54.636647 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 27 08:16:54.659935 ignition[1112]: INFO : Ignition 2.22.0 Oct 27 08:16:54.659935 ignition[1112]: INFO : Stage: umount Oct 27 08:16:54.662883 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:16:54.662883 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:16:54.662883 ignition[1112]: INFO : umount: umount passed Oct 27 08:16:54.662883 ignition[1112]: INFO : Ignition finished successfully Oct 27 08:16:54.672234 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 27 08:16:54.672455 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 27 08:16:54.674303 systemd[1]: Stopped target network.target - Network. Oct 27 08:16:54.677390 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 27 08:16:54.677462 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 27 08:16:54.681782 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 27 08:16:54.681839 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 27 08:16:54.683342 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 27 08:16:54.683402 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 27 08:16:54.686436 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 27 08:16:54.686494 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 27 08:16:54.687035 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 27 08:16:54.692532 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 27 08:16:54.699981 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 27 08:16:54.703300 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 27 08:16:54.703484 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 27 08:16:54.710409 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 27 08:16:54.710542 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 27 08:16:54.716270 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Oct 27 08:16:54.717759 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 27 08:16:54.717822 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:16:54.724132 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 27 08:16:54.726404 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 27 08:16:54.726474 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 08:16:54.730396 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 27 08:16:54.730454 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:16:54.733739 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 27 08:16:54.733792 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 27 08:16:54.737398 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:16:54.753246 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 27 08:16:54.753504 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:16:54.757444 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 27 08:16:54.757499 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 27 08:16:54.762004 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 27 08:16:54.762066 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:16:54.765850 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 27 08:16:54.765976 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 27 08:16:54.772733 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 27 08:16:54.772816 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 27 08:16:54.779859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 08:16:54.779936 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 08:16:54.786627 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 27 08:16:54.789849 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 27 08:16:54.789910 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:16:54.792076 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 27 08:16:54.792157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:16:54.793574 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 27 08:16:54.793635 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:16:54.800175 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 27 08:16:54.800259 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:16:54.800957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:16:54.801005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:16:54.808345 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 27 08:16:54.824453 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 27 08:16:54.827812 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Oct 27 08:16:54.827941 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 27 08:16:54.835030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 27 08:16:54.835219 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 27 08:16:54.840657 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 27 08:16:54.840791 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 27 08:16:54.845326 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 27 08:16:54.849451 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 27 08:16:54.867692 systemd[1]: Switching root. Oct 27 08:16:54.915595 systemd-journald[310]: Journal stopped Oct 27 08:16:56.421164 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Oct 27 08:16:56.421251 kernel: SELinux: policy capability network_peer_controls=1 Oct 27 08:16:56.421272 kernel: SELinux: policy capability open_perms=1 Oct 27 08:16:56.421290 kernel: SELinux: policy capability extended_socket_class=1 Oct 27 08:16:56.421303 kernel: SELinux: policy capability always_check_network=0 Oct 27 08:16:56.421317 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 27 08:16:56.421337 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 27 08:16:56.421349 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 27 08:16:56.421361 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 27 08:16:56.421374 kernel: SELinux: policy capability userspace_initial_context=0 Oct 27 08:16:56.421386 kernel: audit: type=1403 audit(1761553015.495:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 27 08:16:56.421399 systemd[1]: Successfully loaded SELinux policy in 70.092ms. Oct 27 08:16:56.421424 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.303ms. Oct 27 08:16:56.421440 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:16:56.421454 systemd[1]: Detected virtualization kvm. Oct 27 08:16:56.421467 systemd[1]: Detected architecture x86-64. Oct 27 08:16:56.421480 systemd[1]: Detected first boot. Oct 27 08:16:56.421492 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 08:16:56.421505 zram_generator::config[1158]: No configuration found. Oct 27 08:16:56.421521 kernel: Guest personality initialized and is inactive Oct 27 08:16:56.421533 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 27 08:16:56.421546 kernel: Initialized host personality Oct 27 08:16:56.421558 kernel: NET: Registered PF_VSOCK protocol family Oct 27 08:16:56.421570 systemd[1]: Populated /etc with preset unit settings. Oct 27 08:16:56.421582 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 27 08:16:56.421595 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 27 08:16:56.421614 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 27 08:16:56.421628 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 27 08:16:56.421641 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 27 08:16:56.421654 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
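[Annotation] After the switch to the real root, systemd reports "Initializing machine ID from SMBIOS/DMI UUID", which is the usual behaviour on KVM first boot. A small sketch (run as root) that reads the two values involved; the exact derivation is systemd's, the comment only describes the typical result:

    from pathlib import Path

    # The SMBIOS product UUID exposed by QEMU, and the machine ID derived from it.
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = Path("/etc/machine-id").read_text().strip()
    print("SMBIOS UUID :", uuid)
    # Typically the same UUID lower-cased with the dashes stripped.
    print("machine-id  :", machine_id)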
Oct 27 08:16:56.421666 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 27 08:16:56.421679 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 27 08:16:56.421692 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 27 08:16:56.421708 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 27 08:16:56.421720 systemd[1]: Created slice user.slice - User and Session Slice. Oct 27 08:16:56.421733 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:16:56.421746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:16:56.421759 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 27 08:16:56.421772 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 27 08:16:56.421785 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 27 08:16:56.421802 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 08:16:56.421814 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 27 08:16:56.421828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:16:56.421840 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:16:56.421853 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 27 08:16:56.421866 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 27 08:16:56.421881 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 27 08:16:56.421894 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 27 08:16:56.421906 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:16:56.421919 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 08:16:56.421932 systemd[1]: Reached target slices.target - Slice Units. Oct 27 08:16:56.422202 systemd[1]: Reached target swap.target - Swaps. Oct 27 08:16:56.422216 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 27 08:16:56.422229 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 27 08:16:56.422245 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 27 08:16:56.422258 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:16:56.422271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 08:16:56.422283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:16:56.422296 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 27 08:16:56.422308 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 27 08:16:56.422321 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 27 08:16:56.422336 systemd[1]: Mounting media.mount - External Media Directory... Oct 27 08:16:56.422349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:16:56.422362 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Oct 27 08:16:56.422375 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 27 08:16:56.422387 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 27 08:16:56.422400 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 27 08:16:56.422415 systemd[1]: Reached target machines.target - Containers. Oct 27 08:16:56.422429 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 27 08:16:56.422441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:16:56.422454 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 08:16:56.422467 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 27 08:16:56.422487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:16:56.422500 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 08:16:56.422516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:16:56.422529 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 27 08:16:56.422542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:16:56.422555 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 27 08:16:56.422568 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 27 08:16:56.422581 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 27 08:16:56.422593 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 27 08:16:56.422608 systemd[1]: Stopped systemd-fsck-usr.service. Oct 27 08:16:56.422622 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:16:56.422636 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 08:16:56.422648 kernel: fuse: init (API version 7.41) Oct 27 08:16:56.422661 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 08:16:56.422673 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 08:16:56.422686 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 27 08:16:56.422704 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 27 08:16:56.422718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 08:16:56.422731 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:16:56.422743 kernel: ACPI: bus type drm_connector registered Oct 27 08:16:56.422758 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 27 08:16:56.422779 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 27 08:16:56.422792 systemd[1]: Mounted media.mount - External Media Directory. Oct 27 08:16:56.422805 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Oct 27 08:16:56.422818 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 27 08:16:56.422830 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 27 08:16:56.422862 systemd-journald[1233]: Collecting audit messages is disabled. Oct 27 08:16:56.422895 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 27 08:16:56.422908 systemd-journald[1233]: Journal started Oct 27 08:16:56.422931 systemd-journald[1233]: Runtime Journal (/run/log/journal/7ba28b7eed504ba6976f848c39fb835c) is 6M, max 48.3M, 42.2M free. Oct 27 08:16:56.100213 systemd[1]: Queued start job for default target multi-user.target. Oct 27 08:16:56.120382 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 27 08:16:56.120939 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 27 08:16:56.425209 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:16:56.429203 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 08:16:56.431527 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 27 08:16:56.431751 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 27 08:16:56.434028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:16:56.434262 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:16:56.436449 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 08:16:56.436663 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 08:16:56.438725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:16:56.438937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:16:56.441288 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 27 08:16:56.441498 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 27 08:16:56.443638 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:16:56.443847 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:16:56.446031 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 08:16:56.448299 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:16:56.450704 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 27 08:16:56.453202 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 27 08:16:56.469300 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 08:16:56.471782 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 27 08:16:56.473889 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 27 08:16:56.473920 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 08:16:56.476652 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 27 08:16:56.478984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:16:56.480487 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 27 08:16:56.483259 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Oct 27 08:16:56.485231 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 08:16:56.487411 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 27 08:16:56.489228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:16:56.492371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 08:16:56.496317 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 27 08:16:56.500218 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 08:16:56.502306 systemd-journald[1233]: Time spent on flushing to /var/log/journal/7ba28b7eed504ba6976f848c39fb835c is 23.781ms for 974 entries. Oct 27 08:16:56.502306 systemd-journald[1233]: System Journal (/var/log/journal/7ba28b7eed504ba6976f848c39fb835c) is 8M, max 163.5M, 155.5M free. Oct 27 08:16:56.542111 systemd-journald[1233]: Received client request to flush runtime journal. Oct 27 08:16:56.542346 kernel: loop1: detected capacity change from 0 to 110984 Oct 27 08:16:56.503232 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:16:56.504522 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 27 08:16:56.510045 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 27 08:16:56.517336 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 27 08:16:56.529780 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Oct 27 08:16:56.529794 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Oct 27 08:16:56.533275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:16:56.535773 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:16:56.547219 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 27 08:16:56.549506 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 27 08:16:56.560547 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 27 08:16:56.565331 kernel: loop2: detected capacity change from 0 to 219144 Oct 27 08:16:56.584378 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 27 08:16:56.588679 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:16:56.592246 kernel: loop3: detected capacity change from 0 to 128048 Oct 27 08:16:56.593324 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:16:56.606857 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 27 08:16:56.621269 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Oct 27 08:16:56.621287 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Oct 27 08:16:56.626496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 27 08:16:56.631272 kernel: loop4: detected capacity change from 0 to 110984 Oct 27 08:16:56.639208 kernel: loop5: detected capacity change from 0 to 219144 Oct 27 08:16:56.647222 kernel: loop6: detected capacity change from 0 to 128048 Oct 27 08:16:56.653305 (sd-merge)[1302]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 27 08:16:56.656975 (sd-merge)[1302]: Merged extensions into '/usr'. Oct 27 08:16:56.659358 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 27 08:16:56.663797 systemd[1]: Reload requested from client PID 1275 ('systemd-sysext') (unit systemd-sysext.service)... Oct 27 08:16:56.663820 systemd[1]: Reloading... Oct 27 08:16:56.726225 zram_generator::config[1335]: No configuration found. Oct 27 08:16:56.762155 systemd-resolved[1296]: Positive Trust Anchors: Oct 27 08:16:56.762172 systemd-resolved[1296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:16:56.762177 systemd-resolved[1296]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:16:56.762663 systemd-resolved[1296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:16:56.766700 systemd-resolved[1296]: Defaulting to hostname 'linux'. Oct 27 08:16:56.928287 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 27 08:16:56.928656 systemd[1]: Reloading finished in 264 ms. Oct 27 08:16:56.965879 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 08:16:56.968164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 27 08:16:56.972661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:16:56.982952 systemd[1]: Starting ensure-sysext.service... Oct 27 08:16:56.985508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 08:16:56.998488 systemd[1]: Reload requested from client PID 1371 ('systemctl') (unit ensure-sysext.service)... Oct 27 08:16:56.998629 systemd[1]: Reloading... Oct 27 08:16:57.006374 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 27 08:16:57.006413 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 27 08:16:57.006732 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 27 08:16:57.007024 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 27 08:16:57.008016 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 27 08:16:57.008315 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. Oct 27 08:16:57.008391 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. 
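[Annotation] The (sd-merge) lines above show systemd-sysext overlaying containerd-flatcar.raw, docker-flatcar.raw and kubernetes.raw onto /usr; the kubernetes.raw symlink was written into /etc/extensions by Ignition earlier. A sketch, assuming the standard sysext search directories, that lists the extension images a merge would consider:

    from pathlib import Path

    # systemd-sysext scans these hierarchies for *.raw images or directories
    # and merges them over /usr (and /opt).
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        p = Path(d)
        if p.is_dir():
            for image in sorted(p.iterdir()):
                print(f"{d}: {image.name}")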
Oct 27 08:16:57.016506 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 08:16:57.016520 systemd-tmpfiles[1372]: Skipping /boot Oct 27 08:16:57.031415 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 08:16:57.031432 systemd-tmpfiles[1372]: Skipping /boot Oct 27 08:16:57.059222 zram_generator::config[1401]: No configuration found. Oct 27 08:16:57.248027 systemd[1]: Reloading finished in 249 ms. Oct 27 08:16:57.270872 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 27 08:16:57.289924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:16:57.300676 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 27 08:16:57.303792 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 27 08:16:57.321687 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:16:57.325222 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 27 08:16:57.328841 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 27 08:16:57.331785 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 27 08:16:57.336402 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:16:57.344407 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 27 08:16:57.350477 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 27 08:16:57.352657 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 27 08:16:57.397937 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 27 08:16:57.411357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:16:57.411576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:16:57.413263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:16:57.417654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:16:57.427272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:16:57.435651 augenrules[1474]: No rules Oct 27 08:16:57.429727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:16:57.429860 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:16:57.429964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:16:57.433071 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 27 08:16:57.437834 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:16:57.438112 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:16:57.442398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 27 08:16:57.449542 systemd-udevd[1448]: Using default interface naming scheme 'v257'. Oct 27 08:16:57.450074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:16:57.452930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:16:57.453198 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:16:57.455993 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:16:57.456304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:16:57.470326 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:16:57.472032 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:16:57.473709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:16:57.474908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:16:57.484959 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 08:16:57.488330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:16:57.492472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:16:57.494501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:16:57.494550 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:16:57.494621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:16:57.495219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:16:57.497996 systemd[1]: Finished ensure-sysext.service. Oct 27 08:16:57.503258 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 27 08:16:57.505916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:16:57.506757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:16:57.507929 augenrules[1484]: /sbin/augenrules: No change Oct 27 08:16:57.512579 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 08:16:57.515369 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 08:16:57.518861 augenrules[1515]: No rules Oct 27 08:16:57.518800 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:16:57.519066 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:16:57.521891 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:16:57.522483 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:16:57.527274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:16:57.527905 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:16:57.538392 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:16:57.541386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 27 08:16:57.541464 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:16:57.543361 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 27 08:16:57.545336 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 27 08:16:57.582258 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 27 08:16:57.605338 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 08:16:57.637789 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 08:16:57.732452 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 27 08:16:57.732513 kernel: mousedev: PS/2 mouse device common for all mice Oct 27 08:16:57.742829 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 27 08:16:57.743157 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 27 08:16:57.751112 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 08:16:57.762325 kernel: ACPI: button: Power Button [PWRF] Oct 27 08:16:57.807515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:16:57.962490 kernel: kvm_amd: TSC scaling supported Oct 27 08:16:57.962572 kernel: kvm_amd: Nested Virtualization enabled Oct 27 08:16:57.962586 kernel: kvm_amd: Nested Paging enabled Oct 27 08:16:57.967418 kernel: kvm_amd: LBR virtualization supported Oct 27 08:16:57.967446 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 27 08:16:57.967489 kernel: kvm_amd: Virtual GIF supported Oct 27 08:16:57.976415 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 27 08:16:57.976595 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 08:16:58.014054 systemd-networkd[1531]: lo: Link UP Oct 27 08:16:58.014066 systemd-networkd[1531]: lo: Gained carrier Oct 27 08:16:58.016040 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:16:58.016353 systemd[1]: Reached target network.target - Network. Oct 27 08:16:58.018333 systemd-networkd[1531]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:16:58.018338 systemd-networkd[1531]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 08:16:58.019322 systemd-networkd[1531]: eth0: Link UP Oct 27 08:16:58.019584 systemd-networkd[1531]: eth0: Gained carrier Oct 27 08:16:58.019607 systemd-networkd[1531]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:16:58.022458 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 08:16:58.025844 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 08:16:58.034519 systemd-networkd[1531]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 08:16:58.035295 systemd-timesyncd[1532]: Network configuration changed, trying to establish connection. Oct 27 08:16:58.624966 systemd-resolved[1296]: Clock change detected. 
Flushing caches. Oct 27 08:16:58.625002 systemd-timesyncd[1532]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 27 08:16:58.625067 systemd-timesyncd[1532]: Initial clock synchronization to Mon 2025-10-27 08:16:58.624878 UTC. Oct 27 08:16:58.629274 kernel: EDAC MC: Ver: 3.0.0 Oct 27 08:16:58.699096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:16:58.711523 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 27 08:16:58.771809 ldconfig[1445]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 27 08:16:58.780669 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 27 08:16:58.784748 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 27 08:16:58.875187 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 27 08:16:58.877258 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:16:58.879067 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 27 08:16:58.881063 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 08:16:58.883073 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 27 08:16:58.885078 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 27 08:16:58.886917 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 08:16:58.888930 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 27 08:16:58.890935 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 08:16:58.890967 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:16:58.892448 systemd[1]: Reached target timers.target - Timer Units. Oct 27 08:16:58.894836 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 08:16:58.898127 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 08:16:58.901906 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 08:16:58.904088 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 08:16:58.906110 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 08:16:58.913716 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 08:16:58.916070 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 08:16:58.918540 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 08:16:58.920919 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:16:58.922475 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:16:58.924015 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 27 08:16:58.924046 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 27 08:16:58.925338 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 08:16:58.928450 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Oct 27 08:16:58.944535 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 27 08:16:58.947762 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 08:16:58.950912 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 08:16:58.952531 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 08:16:58.953624 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 27 08:16:58.962320 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 08:16:58.967304 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 08:16:58.969797 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing passwd entry cache Oct 27 08:16:58.969808 oslogin_cache_refresh[1591]: Refreshing passwd entry cache Oct 27 08:16:58.970735 jq[1589]: false Oct 27 08:16:58.973347 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 27 08:16:58.973541 extend-filesystems[1590]: Found /dev/vda6 Oct 27 08:16:58.977538 extend-filesystems[1590]: Found /dev/vda9 Oct 27 08:16:58.979282 extend-filesystems[1590]: Checking size of /dev/vda9 Oct 27 08:16:58.980950 oslogin_cache_refresh[1591]: Failure getting users, quitting Oct 27 08:16:58.981405 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting users, quitting Oct 27 08:16:58.981405 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:16:58.981405 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing group entry cache Oct 27 08:16:58.980973 oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:16:58.981040 oslogin_cache_refresh[1591]: Refreshing group entry cache Oct 27 08:16:58.983357 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 27 08:16:58.990232 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting groups, quitting Oct 27 08:16:58.990232 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:16:58.990203 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 08:16:58.989368 oslogin_cache_refresh[1591]: Failure getting groups, quitting Oct 27 08:16:58.989383 oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:16:58.991859 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 08:16:58.992417 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 08:16:58.994122 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 08:16:58.998483 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Oct 27 08:16:58.999577 extend-filesystems[1590]: Resized partition /dev/vda9 Oct 27 08:16:59.002921 extend-filesystems[1616]: resize2fs 1.47.3 (8-Jul-2025) Oct 27 08:16:59.011242 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 27 08:16:59.012416 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 08:16:59.015867 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 27 08:16:59.016133 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 08:16:59.016582 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 27 08:16:59.016868 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 27 08:16:59.020597 systemd[1]: motdgen.service: Deactivated successfully. Oct 27 08:16:59.020905 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 27 08:16:59.027642 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 27 08:16:59.027952 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 08:16:59.035184 jq[1612]: true Oct 27 08:16:59.040239 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 27 08:16:59.043563 update_engine[1611]: I20251027 08:16:59.043478 1611 main.cc:92] Flatcar Update Engine starting Oct 27 08:16:59.058722 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 27 08:16:59.066626 jq[1628]: true Oct 27 08:16:59.074077 extend-filesystems[1616]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 27 08:16:59.074077 extend-filesystems[1616]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 27 08:16:59.074077 extend-filesystems[1616]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 27 08:16:59.079879 extend-filesystems[1590]: Resized filesystem in /dev/vda9 Oct 27 08:16:59.078326 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 08:16:59.079295 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 08:16:59.091629 tar[1621]: linux-amd64/LICENSE Oct 27 08:16:59.092047 tar[1621]: linux-amd64/helm Oct 27 08:16:59.102423 sshd_keygen[1615]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 08:16:59.115243 dbus-daemon[1587]: [system] SELinux support is enabled Oct 27 08:16:59.115648 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 27 08:16:59.120654 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 08:16:59.120686 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 27 08:16:59.123322 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 08:16:59.123349 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 08:16:59.128383 systemd[1]: Started update-engine.service - Update Engine. Oct 27 08:16:59.128796 update_engine[1611]: I20251027 08:16:59.128648 1611 update_check_scheduler.cc:74] Next update check in 2m40s Oct 27 08:16:59.133456 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Oct 27 08:16:59.152159 systemd-logind[1605]: Watching system buttons on /dev/input/event2 (Power Button) Oct 27 08:16:59.152190 systemd-logind[1605]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 27 08:16:59.153703 systemd-logind[1605]: New seat seat0. Oct 27 08:16:59.155440 systemd[1]: Started systemd-logind.service - User Login Management. Oct 27 08:16:59.175053 bash[1663]: Updated "/home/core/.ssh/authorized_keys" Oct 27 08:16:59.175029 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 08:16:59.194556 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 27 08:16:59.209930 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 08:16:59.214431 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 08:16:59.219480 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:35764.service - OpenSSH per-connection server daemon (10.0.0.1:35764). Oct 27 08:16:59.222761 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 27 08:16:59.234856 locksmithd[1662]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 08:16:59.244369 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 08:16:59.244708 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 08:16:59.252284 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 08:16:59.329412 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 08:16:59.334769 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 08:16:59.338280 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 27 08:16:59.340399 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 08:16:59.427597 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 35764 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:16:59.431662 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:16:59.440271 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 27 08:16:59.443390 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 08:16:59.455071 systemd-logind[1605]: New session 1 of user core. Oct 27 08:16:59.488784 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 27 08:16:59.494171 containerd[1622]: time="2025-10-27T08:16:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 27 08:16:59.494836 containerd[1622]: time="2025-10-27T08:16:59.494795307Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 27 08:16:59.495596 systemd[1]: Starting user@500.service - User Manager for UID 500... 
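
The update_engine start ("Next update check in 2m40s") and the locksmithd line above (strategy="reboot") are both driven by Flatcar's update settings rather than anything in this boot itself. Purely as an illustrative sketch, not captured from this host, the file they usually read would look roughly like this (path and values assumed):

    # /etc/flatcar/update.conf -- illustrative sketch, not taken from this machine
    GROUP=stable            # release channel followed by update_engine
    REBOOT_STRATEGY=reboot  # corresponds to the strategy="reboot" locksmithd logs above
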
Oct 27 08:16:59.505028 containerd[1622]: time="2025-10-27T08:16:59.504862226Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.557µs" Oct 27 08:16:59.505028 containerd[1622]: time="2025-10-27T08:16:59.504905407Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 27 08:16:59.505028 containerd[1622]: time="2025-10-27T08:16:59.504926506Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 27 08:16:59.505168 containerd[1622]: time="2025-10-27T08:16:59.505143123Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 27 08:16:59.505168 containerd[1622]: time="2025-10-27T08:16:59.505165895Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 27 08:16:59.505225 containerd[1622]: time="2025-10-27T08:16:59.505199568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:16:59.505396 containerd[1622]: time="2025-10-27T08:16:59.505288145Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:16:59.505396 containerd[1622]: time="2025-10-27T08:16:59.505304726Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:16:59.505600 containerd[1622]: time="2025-10-27T08:16:59.505568631Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:16:59.505600 containerd[1622]: time="2025-10-27T08:16:59.505592505Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:16:59.505660 containerd[1622]: time="2025-10-27T08:16:59.505605380Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:16:59.505660 containerd[1622]: time="2025-10-27T08:16:59.505613986Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 27 08:16:59.505919 containerd[1622]: time="2025-10-27T08:16:59.505713112Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 27 08:16:59.505995 containerd[1622]: time="2025-10-27T08:16:59.505970655Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 08:16:59.506052 containerd[1622]: time="2025-10-27T08:16:59.506026159Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 08:16:59.506088 containerd[1622]: time="2025-10-27T08:16:59.506050975Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 27 08:16:59.506109 containerd[1622]: time="2025-10-27T08:16:59.506095789Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 27 08:16:59.506473 containerd[1622]: 
time="2025-10-27T08:16:59.506363121Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 27 08:16:59.506473 containerd[1622]: time="2025-10-27T08:16:59.506444403Z" level=info msg="metadata content store policy set" policy=shared Oct 27 08:16:59.511828 containerd[1622]: time="2025-10-27T08:16:59.511796901Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 27 08:16:59.511956 containerd[1622]: time="2025-10-27T08:16:59.511864097Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 27 08:16:59.511956 containerd[1622]: time="2025-10-27T08:16:59.511892260Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 27 08:16:59.511956 containerd[1622]: time="2025-10-27T08:16:59.511907819Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 27 08:16:59.511956 containerd[1622]: time="2025-10-27T08:16:59.511922196Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 27 08:16:59.511956 containerd[1622]: time="2025-10-27T08:16:59.511937294Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 27 08:16:59.511956 containerd[1622]: time="2025-10-27T08:16:59.511952002Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 27 08:16:59.512090 containerd[1622]: time="2025-10-27T08:16:59.511974243Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 27 08:16:59.512090 containerd[1622]: time="2025-10-27T08:16:59.511987428Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 27 08:16:59.512090 containerd[1622]: time="2025-10-27T08:16:59.511999381Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 27 08:16:59.512090 containerd[1622]: time="2025-10-27T08:16:59.512009770Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 27 08:16:59.512090 containerd[1622]: time="2025-10-27T08:16:59.512024417Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512202792Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512254429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512271170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512288513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512301056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512313710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 27 08:16:59.512421 containerd[1622]: 
time="2025-10-27T08:16:59.512326554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512338336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512350409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512362001Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 27 08:16:59.512421 containerd[1622]: time="2025-10-27T08:16:59.512373983Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 27 08:16:59.512812 containerd[1622]: time="2025-10-27T08:16:59.512470324Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 27 08:16:59.512812 containerd[1622]: time="2025-10-27T08:16:59.512494429Z" level=info msg="Start snapshots syncer" Oct 27 08:16:59.512812 containerd[1622]: time="2025-10-27T08:16:59.512531098Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 27 08:16:59.512874 containerd[1622]: time="2025-10-27T08:16:59.512811704Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 27 08:16:59.512874 containerd[1622]: time="2025-10-27T08:16:59.512855296Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 27 08:16:59.513086 containerd[1622]: time="2025-10-27T08:16:59.512951356Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Oct 27 08:16:59.513086 containerd[1622]: time="2025-10-27T08:16:59.513063656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 08:16:59.513086 containerd[1622]: time="2025-10-27T08:16:59.513082231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 08:16:59.513146 containerd[1622]: time="2025-10-27T08:16:59.513092761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 08:16:59.513146 containerd[1622]: time="2025-10-27T08:16:59.513104713Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 08:16:59.513146 containerd[1622]: time="2025-10-27T08:16:59.513116415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 08:16:59.513146 containerd[1622]: time="2025-10-27T08:16:59.513127867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 08:16:59.513936 containerd[1622]: time="2025-10-27T08:16:59.513899644Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 08:16:59.514232 containerd[1622]: time="2025-10-27T08:16:59.513977109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 08:16:59.514232 containerd[1622]: time="2025-10-27T08:16:59.514009260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 08:16:59.514232 containerd[1622]: time="2025-10-27T08:16:59.514021503Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 08:16:59.514232 containerd[1622]: time="2025-10-27T08:16:59.514068140Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:16:59.514232 containerd[1622]: time="2025-10-27T08:16:59.514083609Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:16:59.514232 containerd[1622]: time="2025-10-27T08:16:59.514093428Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:16:59.514232 containerd[1622]: time="2025-10-27T08:16:59.514102655Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:16:59.514232 containerd[1622]: time="2025-10-27T08:16:59.514184058Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 08:16:59.514232 containerd[1622]: time="2025-10-27T08:16:59.514197643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 08:16:59.514421 containerd[1622]: time="2025-10-27T08:16:59.514222910Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 08:16:59.514421 containerd[1622]: time="2025-10-27T08:16:59.514271712Z" level=info msg="runtime interface created" Oct 27 08:16:59.514421 containerd[1622]: time="2025-10-27T08:16:59.514277904Z" level=info msg="created NRI interface" Oct 27 08:16:59.514421 containerd[1622]: time="2025-10-27T08:16:59.514286420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Oct 27 08:16:59.514421 containerd[1622]: time="2025-10-27T08:16:59.514330833Z" level=info msg="Connect containerd service" Oct 27 08:16:59.514421 containerd[1622]: time="2025-10-27T08:16:59.514363344Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 08:16:59.515897 containerd[1622]: time="2025-10-27T08:16:59.515681285Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 08:16:59.517665 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 08:16:59.521532 systemd-logind[1605]: New session c1 of user core. Oct 27 08:16:59.694466 tar[1621]: linux-amd64/README.md Oct 27 08:16:59.716480 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 08:16:59.718556 containerd[1622]: time="2025-10-27T08:16:59.717226475Z" level=info msg="Start subscribing containerd event" Oct 27 08:16:59.718556 containerd[1622]: time="2025-10-27T08:16:59.717940934Z" level=info msg="Start recovering state" Oct 27 08:16:59.718556 containerd[1622]: time="2025-10-27T08:16:59.718227552Z" level=info msg="Start event monitor" Oct 27 08:16:59.718683 containerd[1622]: time="2025-10-27T08:16:59.718261856Z" level=info msg="Start cni network conf syncer for default" Oct 27 08:16:59.718683 containerd[1622]: time="2025-10-27T08:16:59.718654543Z" level=info msg="Start streaming server" Oct 27 08:16:59.718683 containerd[1622]: time="2025-10-27T08:16:59.718673789Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 08:16:59.718683 containerd[1622]: time="2025-10-27T08:16:59.718681022Z" level=info msg="runtime interface starting up..." Oct 27 08:16:59.718754 containerd[1622]: time="2025-10-27T08:16:59.718689548Z" level=info msg="starting plugins..." Oct 27 08:16:59.718754 containerd[1622]: time="2025-10-27T08:16:59.718705208Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 27 08:16:59.720473 containerd[1622]: time="2025-10-27T08:16:59.719402596Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 08:16:59.720473 containerd[1622]: time="2025-10-27T08:16:59.719561734Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 27 08:16:59.721302 systemd[1692]: Queued start job for default target default.target. Oct 27 08:16:59.721371 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 08:16:59.723677 containerd[1622]: time="2025-10-27T08:16:59.721690807Z" level=info msg="containerd successfully booted in 0.228155s" Oct 27 08:16:59.727587 systemd[1692]: Created slice app.slice - User Application Slice. Oct 27 08:16:59.727618 systemd[1692]: Reached target paths.target - Paths. Oct 27 08:16:59.727663 systemd[1692]: Reached target timers.target - Timers. Oct 27 08:16:59.729492 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 08:16:59.742190 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 08:16:59.742336 systemd[1692]: Reached target sockets.target - Sockets. Oct 27 08:16:59.742379 systemd[1692]: Reached target basic.target - Basic System. Oct 27 08:16:59.742422 systemd[1692]: Reached target default.target - Main User Target. Oct 27 08:16:59.742458 systemd[1692]: Startup finished in 203ms. 
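
The "failed to load cni during init" error earlier in this stretch only means that /etc/cni/net.d is still empty at this point in the boot; a network add-on would normally drop a conflist there later. Purely to illustrate the file format the cri plugin is waiting for (the file name, network name, and subnet below are invented, e.g. something like /etc/cni/net.d/10-examplenet.conflist), a minimal bridge configuration could look like:

    {
      "cniVersion": "1.0.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.85.0.0/16" }]]
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
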
Oct 27 08:16:59.742881 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 08:16:59.746277 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 08:16:59.749345 systemd-networkd[1531]: eth0: Gained IPv6LL Oct 27 08:16:59.752325 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 08:16:59.754909 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 08:16:59.758441 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 27 08:16:59.761426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:16:59.775853 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 08:16:59.804614 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 27 08:16:59.805074 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 27 08:16:59.808858 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 08:16:59.811642 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:35768.service - OpenSSH per-connection server daemon (10.0.0.1:35768). Oct 27 08:16:59.816629 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 27 08:16:59.871430 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 35768 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:16:59.873250 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:16:59.879111 systemd-logind[1605]: New session 2 of user core. Oct 27 08:16:59.888381 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 08:16:59.948708 sshd[1744]: Connection closed by 10.0.0.1 port 35768 Oct 27 08:16:59.949845 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Oct 27 08:17:00.012295 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:35768.service: Deactivated successfully. Oct 27 08:17:00.014494 systemd[1]: session-2.scope: Deactivated successfully. Oct 27 08:17:00.015315 systemd-logind[1605]: Session 2 logged out. Waiting for processes to exit. Oct 27 08:17:00.018105 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:35772.service - OpenSSH per-connection server daemon (10.0.0.1:35772). Oct 27 08:17:00.020987 systemd-logind[1605]: Removed session 2. Oct 27 08:17:00.075527 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 35772 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:17:00.077067 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:17:00.081675 systemd-logind[1605]: New session 3 of user core. Oct 27 08:17:00.088341 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 27 08:17:00.146317 sshd[1754]: Connection closed by 10.0.0.1 port 35772 Oct 27 08:17:00.146669 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Oct 27 08:17:00.152106 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:35772.service: Deactivated successfully. Oct 27 08:17:00.154346 systemd[1]: session-3.scope: Deactivated successfully. Oct 27 08:17:00.155097 systemd-logind[1605]: Session 3 logged out. Waiting for processes to exit. Oct 27 08:17:00.156672 systemd-logind[1605]: Removed session 3. Oct 27 08:17:00.983439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:17:00.985858 systemd[1]: Reached target multi-user.target - Multi-User System. 
Oct 27 08:17:00.987781 systemd[1]: Startup finished in 2.616s (kernel) + 8.501s (initrd) + 4.973s (userspace) = 16.091s. Oct 27 08:17:01.016612 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:17:01.757312 kubelet[1764]: E1027 08:17:01.757169 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:17:01.761292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:17:01.761512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:17:01.761939 systemd[1]: kubelet.service: Consumed 1.777s CPU time, 258.2M memory peak. Oct 27 08:17:10.169864 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:55546.service - OpenSSH per-connection server daemon (10.0.0.1:55546). Oct 27 08:17:10.232422 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 55546 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:17:10.233936 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:17:10.238384 systemd-logind[1605]: New session 4 of user core. Oct 27 08:17:10.248331 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 27 08:17:10.301480 sshd[1780]: Connection closed by 10.0.0.1 port 55546 Oct 27 08:17:10.301801 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Oct 27 08:17:10.318603 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:55546.service: Deactivated successfully. Oct 27 08:17:10.320545 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 08:17:10.321360 systemd-logind[1605]: Session 4 logged out. Waiting for processes to exit. Oct 27 08:17:10.324108 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:55560.service - OpenSSH per-connection server daemon (10.0.0.1:55560). Oct 27 08:17:10.324716 systemd-logind[1605]: Removed session 4. Oct 27 08:17:10.386416 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 55560 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:17:10.387677 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:17:10.392279 systemd-logind[1605]: New session 5 of user core. Oct 27 08:17:10.404345 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 27 08:17:10.453127 sshd[1789]: Connection closed by 10.0.0.1 port 55560 Oct 27 08:17:10.453364 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Oct 27 08:17:10.461868 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:55560.service: Deactivated successfully. Oct 27 08:17:10.463840 systemd[1]: session-5.scope: Deactivated successfully. Oct 27 08:17:10.464601 systemd-logind[1605]: Session 5 logged out. Waiting for processes to exit. Oct 27 08:17:10.467581 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:55570.service - OpenSSH per-connection server daemon (10.0.0.1:55570). Oct 27 08:17:10.468139 systemd-logind[1605]: Removed session 5. 
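
The kubelet failure above is expected at this stage: the node has not been initialized yet, so /var/lib/kubelet/config.yaml does not exist and the unit exits until its restart timer fires again (the same error recurs below). For orientation only, the file that would eventually live at that path is a KubeletConfiguration; a heavily trimmed, assumed sketch follows (values invented, not read from this host, though cgroupDriver, staticPodPath, and the client CA path match what the kubelet reports later in this log):

    # /var/lib/kubelet/config.yaml -- trimmed illustrative sketch, not from this host
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches cgroupDriver="systemd" reported by the kubelet below
    staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                             # invented cluster DNS address
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
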
Oct 27 08:17:10.537013 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 55570 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:17:10.538494 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:17:10.543058 systemd-logind[1605]: New session 6 of user core. Oct 27 08:17:10.553354 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 27 08:17:10.606606 sshd[1798]: Connection closed by 10.0.0.1 port 55570 Oct 27 08:17:10.606928 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Oct 27 08:17:10.615904 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:55570.service: Deactivated successfully. Oct 27 08:17:10.617784 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 08:17:10.618520 systemd-logind[1605]: Session 6 logged out. Waiting for processes to exit. Oct 27 08:17:10.621299 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:55584.service - OpenSSH per-connection server daemon (10.0.0.1:55584). Oct 27 08:17:10.621883 systemd-logind[1605]: Removed session 6. Oct 27 08:17:10.683828 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 55584 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:17:10.685034 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:17:10.689347 systemd-logind[1605]: New session 7 of user core. Oct 27 08:17:10.703337 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 27 08:17:10.766204 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 08:17:10.766540 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:17:10.779811 sudo[1808]: pam_unix(sudo:session): session closed for user root Oct 27 08:17:10.781840 sshd[1807]: Connection closed by 10.0.0.1 port 55584 Oct 27 08:17:10.782226 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Oct 27 08:17:10.796828 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:55584.service: Deactivated successfully. Oct 27 08:17:10.798723 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 08:17:10.799460 systemd-logind[1605]: Session 7 logged out. Waiting for processes to exit. Oct 27 08:17:10.802537 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:55600.service - OpenSSH per-connection server daemon (10.0.0.1:55600). Oct 27 08:17:10.803067 systemd-logind[1605]: Removed session 7. Oct 27 08:17:10.863427 sshd[1814]: Accepted publickey for core from 10.0.0.1 port 55600 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:17:10.864788 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:17:10.869140 systemd-logind[1605]: New session 8 of user core. Oct 27 08:17:10.882349 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 27 08:17:10.936665 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 08:17:10.936988 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:17:11.136601 sudo[1819]: pam_unix(sudo:session): session closed for user root Oct 27 08:17:11.145060 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 08:17:11.145398 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:17:11.155600 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:17:11.206925 augenrules[1841]: No rules Oct 27 08:17:11.208758 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:17:11.209043 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:17:11.210203 sudo[1818]: pam_unix(sudo:session): session closed for user root Oct 27 08:17:11.212034 sshd[1817]: Connection closed by 10.0.0.1 port 55600 Oct 27 08:17:11.212352 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Oct 27 08:17:11.226704 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:55600.service: Deactivated successfully. Oct 27 08:17:11.228547 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 08:17:11.229294 systemd-logind[1605]: Session 8 logged out. Waiting for processes to exit. Oct 27 08:17:11.232080 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:55606.service - OpenSSH per-connection server daemon (10.0.0.1:55606). Oct 27 08:17:11.232628 systemd-logind[1605]: Removed session 8. Oct 27 08:17:11.291880 sshd[1850]: Accepted publickey for core from 10.0.0.1 port 55606 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:17:11.293039 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:17:11.297280 systemd-logind[1605]: New session 9 of user core. Oct 27 08:17:11.307324 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 08:17:11.361253 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 08:17:11.361566 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:17:12.033259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 27 08:17:12.035698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:17:12.138983 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 27 08:17:12.162631 (dockerd)[1877]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 08:17:12.368357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 27 08:17:12.375027 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:17:12.498987 kubelet[1883]: E1027 08:17:12.498918 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:17:12.505598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:17:12.505840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:17:12.506390 systemd[1]: kubelet.service: Consumed 457ms CPU time, 111.5M memory peak. Oct 27 08:17:12.843288 dockerd[1877]: time="2025-10-27T08:17:12.843118728Z" level=info msg="Starting up" Oct 27 08:17:12.844310 dockerd[1877]: time="2025-10-27T08:17:12.844285155Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 27 08:17:12.914754 dockerd[1877]: time="2025-10-27T08:17:12.914687705Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 27 08:17:13.385336 systemd[1]: var-lib-docker-metacopy\x2dcheck1964928056-merged.mount: Deactivated successfully. Oct 27 08:17:13.413881 dockerd[1877]: time="2025-10-27T08:17:13.413806571Z" level=info msg="Loading containers: start." Oct 27 08:17:13.428254 kernel: Initializing XFRM netlink socket Oct 27 08:17:13.740460 systemd-networkd[1531]: docker0: Link UP Oct 27 08:17:13.746323 dockerd[1877]: time="2025-10-27T08:17:13.746270570Z" level=info msg="Loading containers: done." Oct 27 08:17:13.773870 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck810911496-merged.mount: Deactivated successfully. Oct 27 08:17:13.776607 dockerd[1877]: time="2025-10-27T08:17:13.776548883Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 08:17:13.776680 dockerd[1877]: time="2025-10-27T08:17:13.776670380Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 27 08:17:13.776795 dockerd[1877]: time="2025-10-27T08:17:13.776767723Z" level=info msg="Initializing buildkit" Oct 27 08:17:13.807689 dockerd[1877]: time="2025-10-27T08:17:13.807639208Z" level=info msg="Completed buildkit initialization" Oct 27 08:17:13.813652 dockerd[1877]: time="2025-10-27T08:17:13.813602992Z" level=info msg="Daemon has completed initialization" Oct 27 08:17:13.813759 dockerd[1877]: time="2025-10-27T08:17:13.813682411Z" level=info msg="API listen on /run/docker.sock" Oct 27 08:17:13.814020 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 08:17:14.600009 containerd[1622]: time="2025-10-27T08:17:14.599879521Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 27 08:17:15.420107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329262620.mount: Deactivated successfully. 
Oct 27 08:17:16.987347 containerd[1622]: time="2025-10-27T08:17:16.987259571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:16.987971 containerd[1622]: time="2025-10-27T08:17:16.987943524Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 27 08:17:16.989108 containerd[1622]: time="2025-10-27T08:17:16.989059116Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:16.991713 containerd[1622]: time="2025-10-27T08:17:16.991676344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:16.992766 containerd[1622]: time="2025-10-27T08:17:16.992712267Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.39271281s" Oct 27 08:17:16.992824 containerd[1622]: time="2025-10-27T08:17:16.992769604Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 27 08:17:16.993752 containerd[1622]: time="2025-10-27T08:17:16.993715869Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 27 08:17:19.075031 containerd[1622]: time="2025-10-27T08:17:19.074952290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:19.095168 containerd[1622]: time="2025-10-27T08:17:19.095111456Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 27 08:17:19.111059 containerd[1622]: time="2025-10-27T08:17:19.110963644Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:19.140952 containerd[1622]: time="2025-10-27T08:17:19.140887562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:19.142409 containerd[1622]: time="2025-10-27T08:17:19.142327333Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 2.148575707s" Oct 27 08:17:19.142409 containerd[1622]: time="2025-10-27T08:17:19.142404938Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 27 08:17:19.143456 
containerd[1622]: time="2025-10-27T08:17:19.143399884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 27 08:17:21.188094 containerd[1622]: time="2025-10-27T08:17:21.188000189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:21.188854 containerd[1622]: time="2025-10-27T08:17:21.188822021Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 27 08:17:21.190352 containerd[1622]: time="2025-10-27T08:17:21.190290935Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:21.193627 containerd[1622]: time="2025-10-27T08:17:21.193573932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:21.195315 containerd[1622]: time="2025-10-27T08:17:21.195281124Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 2.051818101s" Oct 27 08:17:21.195400 containerd[1622]: time="2025-10-27T08:17:21.195320197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 27 08:17:21.196013 containerd[1622]: time="2025-10-27T08:17:21.195927576Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 27 08:17:22.306044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592947604.mount: Deactivated successfully. Oct 27 08:17:22.756378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 27 08:17:22.758383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:17:23.052568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:17:23.061489 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:17:23.109571 kubelet[2191]: E1027 08:17:23.109515 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:17:23.113737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:17:23.113972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:17:23.114426 systemd[1]: kubelet.service: Consumed 258ms CPU time, 109.1M memory peak. 
Oct 27 08:17:24.031949 containerd[1622]: time="2025-10-27T08:17:24.031887082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:24.039060 containerd[1622]: time="2025-10-27T08:17:24.039005903Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 27 08:17:24.113601 containerd[1622]: time="2025-10-27T08:17:24.113548136Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:24.197187 containerd[1622]: time="2025-10-27T08:17:24.197145872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:24.197656 containerd[1622]: time="2025-10-27T08:17:24.197626553Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 3.001654273s" Oct 27 08:17:24.197733 containerd[1622]: time="2025-10-27T08:17:24.197657471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 27 08:17:24.198349 containerd[1622]: time="2025-10-27T08:17:24.198241206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 27 08:17:25.716695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349893457.mount: Deactivated successfully. 
Oct 27 08:17:26.597968 containerd[1622]: time="2025-10-27T08:17:26.597892378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:26.598645 containerd[1622]: time="2025-10-27T08:17:26.598590407Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 27 08:17:26.599923 containerd[1622]: time="2025-10-27T08:17:26.599882460Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:26.602712 containerd[1622]: time="2025-10-27T08:17:26.602663626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:26.603807 containerd[1622]: time="2025-10-27T08:17:26.603770662Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.405502495s" Oct 27 08:17:26.603860 containerd[1622]: time="2025-10-27T08:17:26.603811950Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 27 08:17:26.604383 containerd[1622]: time="2025-10-27T08:17:26.604310224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 27 08:17:27.007042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514372837.mount: Deactivated successfully. 
Oct 27 08:17:27.012765 containerd[1622]: time="2025-10-27T08:17:27.012719347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:27.013448 containerd[1622]: time="2025-10-27T08:17:27.013407718Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 27 08:17:27.014580 containerd[1622]: time="2025-10-27T08:17:27.014538839Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:27.019178 containerd[1622]: time="2025-10-27T08:17:27.019122365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:27.019760 containerd[1622]: time="2025-10-27T08:17:27.019725286Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 415.384855ms" Oct 27 08:17:27.019760 containerd[1622]: time="2025-10-27T08:17:27.019757626Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 27 08:17:27.020271 containerd[1622]: time="2025-10-27T08:17:27.020238358Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 27 08:17:30.849133 containerd[1622]: time="2025-10-27T08:17:30.849046620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:30.880069 containerd[1622]: time="2025-10-27T08:17:30.880011851Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 27 08:17:30.905899 containerd[1622]: time="2025-10-27T08:17:30.905847492Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:30.978992 containerd[1622]: time="2025-10-27T08:17:30.978944655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:30.979891 containerd[1622]: time="2025-10-27T08:17:30.979851516Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.959577361s" Oct 27 08:17:30.979891 containerd[1622]: time="2025-10-27T08:17:30.979879378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 27 08:17:33.295012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 27 08:17:33.296888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 27 08:17:33.310098 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 27 08:17:33.310226 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 27 08:17:33.310505 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:17:33.312802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:17:33.341799 systemd[1]: Reload requested from client PID 2330 ('systemctl') (unit session-9.scope)... Oct 27 08:17:33.341816 systemd[1]: Reloading... Oct 27 08:17:33.436422 zram_generator::config[2374]: No configuration found. Oct 27 08:17:33.805479 systemd[1]: Reloading finished in 463 ms. Oct 27 08:17:33.884049 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 27 08:17:33.884168 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 27 08:17:33.884494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:17:33.884537 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.2M memory peak. Oct 27 08:17:33.886265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:17:34.072232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:17:34.076702 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 08:17:34.164224 kubelet[2422]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 08:17:34.164224 kubelet[2422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:17:34.164716 kubelet[2422]: I1027 08:17:34.164429 2422 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 08:17:35.057072 kubelet[2422]: I1027 08:17:35.057014 2422 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 27 08:17:35.057072 kubelet[2422]: I1027 08:17:35.057051 2422 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 08:17:35.057072 kubelet[2422]: I1027 08:17:35.057082 2422 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 27 08:17:35.057276 kubelet[2422]: I1027 08:17:35.057089 2422 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
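
The repeated "Referenced but unset environment variable" notes for KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS, and the --config file the kubelet keeps looking for, both come from a kubeadm-style systemd drop-in that the unit is started through. Roughly, and with the exact kubelet binary path and environment-file locations on this image being assumptions, that drop-in looks like:

    # kubelet.service.d/10-kubeadm.conf -- approximate sketch, paths assumed for this image
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # kubeadm writes the node's runtime flags here once init/join has run;
    # an optional /etc/default/kubelet (or similar) supplies KUBELET_EXTRA_ARGS
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

Until kubeadm-flags.env and config.yaml exist, those variables expand to empty strings, which is exactly what the warnings above are pointing out.
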
Oct 27 08:17:35.057356 kubelet[2422]: I1027 08:17:35.057327 2422 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 08:17:35.747470 kubelet[2422]: E1027 08:17:35.747417 2422 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 27 08:17:35.747935 kubelet[2422]: I1027 08:17:35.747481 2422 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:17:35.751990 kubelet[2422]: I1027 08:17:35.751966 2422 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 08:17:35.757394 kubelet[2422]: I1027 08:17:35.757355 2422 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 27 08:17:35.758461 kubelet[2422]: I1027 08:17:35.758418 2422 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 08:17:35.758622 kubelet[2422]: I1027 08:17:35.758454 2422 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 08:17:35.758786 kubelet[2422]: I1027 08:17:35.758626 2422 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 08:17:35.758786 kubelet[2422]: I1027 08:17:35.758636 2422 container_manager_linux.go:306] "Creating device plugin manager" Oct 27 08:17:35.758786 kubelet[2422]: I1027 08:17:35.758746 2422 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 27 08:17:36.152369 kubelet[2422]: I1027 08:17:36.152183 2422 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:17:36.152576 kubelet[2422]: I1027 08:17:36.152506 2422 kubelet.go:475] "Attempting to sync node with API server" Oct 27 
08:17:36.152576 kubelet[2422]: I1027 08:17:36.152530 2422 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 08:17:36.152576 kubelet[2422]: I1027 08:17:36.152559 2422 kubelet.go:387] "Adding apiserver pod source" Oct 27 08:17:36.152656 kubelet[2422]: I1027 08:17:36.152589 2422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 08:17:36.157231 kubelet[2422]: E1027 08:17:36.156933 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 08:17:36.157231 kubelet[2422]: E1027 08:17:36.156951 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 08:17:36.158899 kubelet[2422]: I1027 08:17:36.157710 2422 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 08:17:36.158899 kubelet[2422]: I1027 08:17:36.158540 2422 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 08:17:36.158899 kubelet[2422]: I1027 08:17:36.158588 2422 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 27 08:17:36.158899 kubelet[2422]: W1027 08:17:36.158694 2422 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
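
The reflector failures above are ordinary list calls hitting an API server that is not yet serving on 10.0.0.23:6443. A sketch of the equivalent list with client-go; the kubeconfig path is an assumption for illustration, not taken from this log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; the kubelet itself uses its bootstrap/rotated
        // credentials rather than this file.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same query the Node reflector issues: nodes filtered by name.
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
            FieldSelector: "metadata.name=localhost",
            Limit:         500,
        })
        if err != nil {
            // While the static kube-apiserver pod is still being created this
            // fails with "connect: connection refused", as in the log above.
            fmt.Println("list failed:", err)
            return
        }
        fmt.Println("nodes found:", len(nodes.Items))
    }
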
Oct 27 08:17:36.164411 kubelet[2422]: I1027 08:17:36.164381 2422 server.go:1262] "Started kubelet" Oct 27 08:17:36.164654 kubelet[2422]: I1027 08:17:36.164599 2422 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 08:17:36.164865 kubelet[2422]: I1027 08:17:36.164803 2422 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 08:17:36.164925 kubelet[2422]: I1027 08:17:36.164872 2422 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 27 08:17:36.165707 kubelet[2422]: I1027 08:17:36.165514 2422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 08:17:36.165977 kubelet[2422]: I1027 08:17:36.165954 2422 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 08:17:36.169376 kubelet[2422]: I1027 08:17:36.166670 2422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 08:17:36.169530 kubelet[2422]: I1027 08:17:36.169304 2422 server.go:310] "Adding debug handlers to kubelet server" Oct 27 08:17:36.169601 kubelet[2422]: E1027 08:17:36.169578 2422 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:17:36.169689 kubelet[2422]: I1027 08:17:36.169439 2422 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 27 08:17:36.170111 kubelet[2422]: I1027 08:17:36.169421 2422 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 27 08:17:36.170457 kubelet[2422]: I1027 08:17:36.170420 2422 reconciler.go:29] "Reconciler: start to sync state" Oct 27 08:17:36.170588 kubelet[2422]: E1027 08:17:36.170491 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 08:17:36.171513 kubelet[2422]: E1027 08:17:36.170924 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms" Oct 27 08:17:36.172895 kubelet[2422]: E1027 08:17:36.172843 2422 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 08:17:36.173502 kubelet[2422]: E1027 08:17:36.171809 2422 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18724b2be5adadbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 08:17:36.164326845 +0000 UTC m=+2.081058421,LastTimestamp:2025-10-27 08:17:36.164326845 +0000 UTC m=+2.081058421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 08:17:36.174089 kubelet[2422]: I1027 08:17:36.174064 2422 factory.go:223] Registration of the containerd container factory successfully Oct 27 08:17:36.174089 kubelet[2422]: I1027 08:17:36.174081 2422 factory.go:223] Registration of the systemd container factory successfully Oct 27 08:17:36.174234 kubelet[2422]: I1027 08:17:36.174186 2422 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 08:17:36.188048 kubelet[2422]: I1027 08:17:36.188019 2422 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 08:17:36.188048 kubelet[2422]: I1027 08:17:36.188040 2422 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 08:17:36.188152 kubelet[2422]: I1027 08:17:36.188055 2422 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:17:36.192354 kubelet[2422]: I1027 08:17:36.192310 2422 policy_none.go:49] "None policy: Start" Oct 27 08:17:36.192354 kubelet[2422]: I1027 08:17:36.192333 2422 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 27 08:17:36.192354 kubelet[2422]: I1027 08:17:36.192346 2422 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 27 08:17:36.193036 kubelet[2422]: I1027 08:17:36.192972 2422 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 27 08:17:36.193677 kubelet[2422]: I1027 08:17:36.193654 2422 policy_none.go:47] "Start" Oct 27 08:17:36.195053 kubelet[2422]: I1027 08:17:36.194777 2422 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 27 08:17:36.195156 kubelet[2422]: I1027 08:17:36.195139 2422 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 27 08:17:36.195231 kubelet[2422]: I1027 08:17:36.195167 2422 kubelet.go:2427] "Starting kubelet main sync loop" Oct 27 08:17:36.195267 kubelet[2422]: E1027 08:17:36.195248 2422 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 08:17:36.196737 kubelet[2422]: E1027 08:17:36.196706 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 27 08:17:36.199991 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 27 08:17:36.222452 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 27 08:17:36.225952 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 27 08:17:36.239340 kubelet[2422]: E1027 08:17:36.239173 2422 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 08:17:36.239526 kubelet[2422]: I1027 08:17:36.239501 2422 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 08:17:36.239596 kubelet[2422]: I1027 08:17:36.239526 2422 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 08:17:36.239931 kubelet[2422]: I1027 08:17:36.239907 2422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 08:17:36.241088 kubelet[2422]: E1027 08:17:36.241039 2422 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 27 08:17:36.241088 kubelet[2422]: E1027 08:17:36.241094 2422 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 27 08:17:36.310381 systemd[1]: Created slice kubepods-burstable-poda05d045b36fb1412d8fd4a3351719417.slice - libcontainer container kubepods-burstable-poda05d045b36fb1412d8fd4a3351719417.slice. Oct 27 08:17:36.341332 kubelet[2422]: I1027 08:17:36.341256 2422 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:17:36.341781 kubelet[2422]: E1027 08:17:36.341751 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Oct 27 08:17:36.342348 kubelet[2422]: E1027 08:17:36.342326 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:17:36.346152 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 27 08:17:36.348393 kubelet[2422]: E1027 08:17:36.348371 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:17:36.350646 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 27 08:17:36.352498 kubelet[2422]: E1027 08:17:36.352465 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:17:36.371961 kubelet[2422]: I1027 08:17:36.371857 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a05d045b36fb1412d8fd4a3351719417-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a05d045b36fb1412d8fd4a3351719417\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:36.371961 kubelet[2422]: I1027 08:17:36.371944 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:36.371961 kubelet[2422]: I1027 08:17:36.371966 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:36.372103 kubelet[2422]: I1027 08:17:36.371990 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:36.372103 kubelet[2422]: I1027 08:17:36.372016 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a05d045b36fb1412d8fd4a3351719417-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a05d045b36fb1412d8fd4a3351719417\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:36.372103 kubelet[2422]: I1027 08:17:36.372037 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a05d045b36fb1412d8fd4a3351719417-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a05d045b36fb1412d8fd4a3351719417\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:36.372103 kubelet[2422]: I1027 08:17:36.372053 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:36.372103 kubelet[2422]: I1027 08:17:36.372098 2422 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:36.372409 kubelet[2422]: I1027 08:17:36.372142 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 27 08:17:36.372409 kubelet[2422]: E1027 08:17:36.372358 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms" Oct 27 08:17:36.543813 kubelet[2422]: I1027 08:17:36.543638 2422 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:17:36.544049 kubelet[2422]: E1027 08:17:36.544010 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Oct 27 08:17:36.646958 kubelet[2422]: E1027 08:17:36.646836 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:36.647832 containerd[1622]: time="2025-10-27T08:17:36.647758285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a05d045b36fb1412d8fd4a3351719417,Namespace:kube-system,Attempt:0,}" Oct 27 08:17:36.652495 kubelet[2422]: E1027 08:17:36.652448 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:36.652923 containerd[1622]: time="2025-10-27T08:17:36.652884930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 27 08:17:36.656106 kubelet[2422]: E1027 08:17:36.656052 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:36.656530 containerd[1622]: time="2025-10-27T08:17:36.656493920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 27 08:17:36.773726 kubelet[2422]: E1027 08:17:36.773642 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms" Oct 27 08:17:36.946149 kubelet[2422]: I1027 08:17:36.946030 2422 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:17:36.946899 kubelet[2422]: E1027 08:17:36.946841 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" 
node="localhost" Oct 27 08:17:36.993096 kubelet[2422]: E1027 08:17:36.992943 2422 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18724b2be5adadbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 08:17:36.164326845 +0000 UTC m=+2.081058421,LastTimestamp:2025-10-27 08:17:36.164326845 +0000 UTC m=+2.081058421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 08:17:37.078827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790353895.mount: Deactivated successfully. Oct 27 08:17:37.087504 containerd[1622]: time="2025-10-27T08:17:37.087435681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:17:37.090985 containerd[1622]: time="2025-10-27T08:17:37.090921757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 27 08:17:37.091893 containerd[1622]: time="2025-10-27T08:17:37.091862355Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:17:37.092730 containerd[1622]: time="2025-10-27T08:17:37.092692122Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:17:37.093505 containerd[1622]: time="2025-10-27T08:17:37.093479477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 27 08:17:37.094518 containerd[1622]: time="2025-10-27T08:17:37.094490801Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:17:37.095256 containerd[1622]: time="2025-10-27T08:17:37.095224093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 27 08:17:37.097321 containerd[1622]: time="2025-10-27T08:17:37.097274975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:17:37.099512 containerd[1622]: time="2025-10-27T08:17:37.099470624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 438.762545ms" Oct 27 08:17:37.100007 containerd[1622]: time="2025-10-27T08:17:37.099979867Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 447.881122ms" Oct 27 08:17:37.102779 containerd[1622]: time="2025-10-27T08:17:37.102745074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 446.129602ms" Oct 27 08:17:37.140242 containerd[1622]: time="2025-10-27T08:17:37.140145077Z" level=info msg="connecting to shim dd7606f10765234eafdf9ba3dc522219fe8fbab3774f5afce282d6b90bf303b4" address="unix:///run/containerd/s/f12892aef89c9c69afc972ef3daf7bce30de9ecb26943b92609fcfe0c2003239" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:17:37.141805 containerd[1622]: time="2025-10-27T08:17:37.141783210Z" level=info msg="connecting to shim 3ee61bb3eb76dc43c8d2a4546ecdb075abf51bb17d542e6106333c839e680505" address="unix:///run/containerd/s/c0a2a0caa99d0e4337d52595f22e226974383905e93bb5c86f98cd81e75ea321" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:17:37.142310 containerd[1622]: time="2025-10-27T08:17:37.142267696Z" level=info msg="connecting to shim c39d05232dcdf1228d2f5cead96d3557ab1ced342cece51d077b6e8785d51cfc" address="unix:///run/containerd/s/a8120ad94d59a288fc7855d33bb69122a5ed6aab6fa6ed43783d75323857140d" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:17:37.197388 systemd[1]: Started cri-containerd-3ee61bb3eb76dc43c8d2a4546ecdb075abf51bb17d542e6106333c839e680505.scope - libcontainer container 3ee61bb3eb76dc43c8d2a4546ecdb075abf51bb17d542e6106333c839e680505. Oct 27 08:17:37.203024 systemd[1]: Started cri-containerd-c39d05232dcdf1228d2f5cead96d3557ab1ced342cece51d077b6e8785d51cfc.scope - libcontainer container c39d05232dcdf1228d2f5cead96d3557ab1ced342cece51d077b6e8785d51cfc. Oct 27 08:17:37.205397 systemd[1]: Started cri-containerd-dd7606f10765234eafdf9ba3dc522219fe8fbab3774f5afce282d6b90bf303b4.scope - libcontainer container dd7606f10765234eafdf9ba3dc522219fe8fbab3774f5afce282d6b90bf303b4. 
Oct 27 08:17:37.263038 kubelet[2422]: E1027 08:17:37.262990 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 08:17:37.268928 containerd[1622]: time="2025-10-27T08:17:37.268883960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a05d045b36fb1412d8fd4a3351719417,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ee61bb3eb76dc43c8d2a4546ecdb075abf51bb17d542e6106333c839e680505\"" Oct 27 08:17:37.269998 kubelet[2422]: E1027 08:17:37.269932 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:37.358011 containerd[1622]: time="2025-10-27T08:17:37.357949734Z" level=info msg="CreateContainer within sandbox \"3ee61bb3eb76dc43c8d2a4546ecdb075abf51bb17d542e6106333c839e680505\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 27 08:17:37.358011 containerd[1622]: time="2025-10-27T08:17:37.358000711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd7606f10765234eafdf9ba3dc522219fe8fbab3774f5afce282d6b90bf303b4\"" Oct 27 08:17:37.358791 kubelet[2422]: E1027 08:17:37.358745 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:37.359853 containerd[1622]: time="2025-10-27T08:17:37.359820231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c39d05232dcdf1228d2f5cead96d3557ab1ced342cece51d077b6e8785d51cfc\"" Oct 27 08:17:37.360748 kubelet[2422]: E1027 08:17:37.360723 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:37.362988 containerd[1622]: time="2025-10-27T08:17:37.362942471Z" level=info msg="CreateContainer within sandbox \"dd7606f10765234eafdf9ba3dc522219fe8fbab3774f5afce282d6b90bf303b4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 27 08:17:37.365614 containerd[1622]: time="2025-10-27T08:17:37.365571467Z" level=info msg="CreateContainer within sandbox \"c39d05232dcdf1228d2f5cead96d3557ab1ced342cece51d077b6e8785d51cfc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 27 08:17:37.370818 kubelet[2422]: E1027 08:17:37.370768 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 08:17:37.375527 containerd[1622]: time="2025-10-27T08:17:37.375493620Z" level=info msg="Container 4f2768500ab483035eb14f2361efeeec1757bd555fa63ca7cb81322b050c5d10: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:17:37.378444 containerd[1622]: time="2025-10-27T08:17:37.378409325Z" 
level=info msg="Container 2d37afcee18988d7fa7e002b140ba4fc843a1f25b2d982424cfa463787fc6033: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:17:37.381123 containerd[1622]: time="2025-10-27T08:17:37.381086204Z" level=info msg="Container a9e27d97c524231d0b2cdc8a239b1f6728f3e8a5896393e78bdf36694357a4f4: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:17:37.385494 containerd[1622]: time="2025-10-27T08:17:37.385460618Z" level=info msg="CreateContainer within sandbox \"3ee61bb3eb76dc43c8d2a4546ecdb075abf51bb17d542e6106333c839e680505\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f2768500ab483035eb14f2361efeeec1757bd555fa63ca7cb81322b050c5d10\"" Oct 27 08:17:37.385989 containerd[1622]: time="2025-10-27T08:17:37.385964983Z" level=info msg="StartContainer for \"4f2768500ab483035eb14f2361efeeec1757bd555fa63ca7cb81322b050c5d10\"" Oct 27 08:17:37.387048 containerd[1622]: time="2025-10-27T08:17:37.387023016Z" level=info msg="connecting to shim 4f2768500ab483035eb14f2361efeeec1757bd555fa63ca7cb81322b050c5d10" address="unix:///run/containerd/s/c0a2a0caa99d0e4337d52595f22e226974383905e93bb5c86f98cd81e75ea321" protocol=ttrpc version=3 Oct 27 08:17:37.388825 containerd[1622]: time="2025-10-27T08:17:37.388779544Z" level=info msg="CreateContainer within sandbox \"dd7606f10765234eafdf9ba3dc522219fe8fbab3774f5afce282d6b90bf303b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2d37afcee18988d7fa7e002b140ba4fc843a1f25b2d982424cfa463787fc6033\"" Oct 27 08:17:37.389321 containerd[1622]: time="2025-10-27T08:17:37.389296653Z" level=info msg="StartContainer for \"2d37afcee18988d7fa7e002b140ba4fc843a1f25b2d982424cfa463787fc6033\"" Oct 27 08:17:37.390318 containerd[1622]: time="2025-10-27T08:17:37.390189220Z" level=info msg="connecting to shim 2d37afcee18988d7fa7e002b140ba4fc843a1f25b2d982424cfa463787fc6033" address="unix:///run/containerd/s/f12892aef89c9c69afc972ef3daf7bce30de9ecb26943b92609fcfe0c2003239" protocol=ttrpc version=3 Oct 27 08:17:37.394747 containerd[1622]: time="2025-10-27T08:17:37.394708502Z" level=info msg="CreateContainer within sandbox \"c39d05232dcdf1228d2f5cead96d3557ab1ced342cece51d077b6e8785d51cfc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a9e27d97c524231d0b2cdc8a239b1f6728f3e8a5896393e78bdf36694357a4f4\"" Oct 27 08:17:37.395342 containerd[1622]: time="2025-10-27T08:17:37.395292959Z" level=info msg="StartContainer for \"a9e27d97c524231d0b2cdc8a239b1f6728f3e8a5896393e78bdf36694357a4f4\"" Oct 27 08:17:37.396289 containerd[1622]: time="2025-10-27T08:17:37.396261781Z" level=info msg="connecting to shim a9e27d97c524231d0b2cdc8a239b1f6728f3e8a5896393e78bdf36694357a4f4" address="unix:///run/containerd/s/a8120ad94d59a288fc7855d33bb69122a5ed6aab6fa6ed43783d75323857140d" protocol=ttrpc version=3 Oct 27 08:17:37.409371 systemd[1]: Started cri-containerd-4f2768500ab483035eb14f2361efeeec1757bd555fa63ca7cb81322b050c5d10.scope - libcontainer container 4f2768500ab483035eb14f2361efeeec1757bd555fa63ca7cb81322b050c5d10. Oct 27 08:17:37.413384 systemd[1]: Started cri-containerd-2d37afcee18988d7fa7e002b140ba4fc843a1f25b2d982424cfa463787fc6033.scope - libcontainer container 2d37afcee18988d7fa7e002b140ba4fc843a1f25b2d982424cfa463787fc6033. Oct 27 08:17:37.420105 systemd[1]: Started cri-containerd-a9e27d97c524231d0b2cdc8a239b1f6728f3e8a5896393e78bdf36694357a4f4.scope - libcontainer container a9e27d97c524231d0b2cdc8a239b1f6728f3e8a5896393e78bdf36694357a4f4. 
Oct 27 08:17:37.554606 kubelet[2422]: E1027 08:17:37.553825 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 08:17:37.574487 kubelet[2422]: E1027 08:17:37.574399 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="1.6s" Oct 27 08:17:37.597621 containerd[1622]: time="2025-10-27T08:17:37.597562110Z" level=info msg="StartContainer for \"4f2768500ab483035eb14f2361efeeec1757bd555fa63ca7cb81322b050c5d10\" returns successfully" Oct 27 08:17:37.620329 containerd[1622]: time="2025-10-27T08:17:37.620276633Z" level=info msg="StartContainer for \"a9e27d97c524231d0b2cdc8a239b1f6728f3e8a5896393e78bdf36694357a4f4\" returns successfully" Oct 27 08:17:37.637348 containerd[1622]: time="2025-10-27T08:17:37.637181045Z" level=info msg="StartContainer for \"2d37afcee18988d7fa7e002b140ba4fc843a1f25b2d982424cfa463787fc6033\" returns successfully" Oct 27 08:17:37.749708 kubelet[2422]: I1027 08:17:37.749542 2422 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:17:38.216240 kubelet[2422]: E1027 08:17:38.209804 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:17:38.216240 kubelet[2422]: E1027 08:17:38.209948 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:38.216240 kubelet[2422]: E1027 08:17:38.212476 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:17:38.216240 kubelet[2422]: E1027 08:17:38.212569 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:38.221463 kubelet[2422]: E1027 08:17:38.221435 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:17:38.221570 kubelet[2422]: E1027 08:17:38.221549 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:39.219717 kubelet[2422]: E1027 08:17:39.219681 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:17:39.220143 kubelet[2422]: E1027 08:17:39.219828 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:39.220143 kubelet[2422]: E1027 08:17:39.219901 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 08:17:39.220143 kubelet[2422]: E1027 
08:17:39.220073 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:39.262171 kubelet[2422]: E1027 08:17:39.262039 2422 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 27 08:17:39.372470 kubelet[2422]: I1027 08:17:39.372420 2422 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 08:17:39.372470 kubelet[2422]: E1027 08:17:39.372458 2422 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 27 08:17:39.385618 kubelet[2422]: E1027 08:17:39.385544 2422 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:17:39.486106 kubelet[2422]: E1027 08:17:39.485941 2422 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:17:39.586643 kubelet[2422]: E1027 08:17:39.586559 2422 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:17:39.687532 kubelet[2422]: E1027 08:17:39.687478 2422 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:17:39.788661 kubelet[2422]: E1027 08:17:39.788489 2422 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:17:39.889178 kubelet[2422]: E1027 08:17:39.889135 2422 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 08:17:39.970533 kubelet[2422]: I1027 08:17:39.970492 2422 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 08:17:39.975090 kubelet[2422]: E1027 08:17:39.975042 2422 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 27 08:17:39.975090 kubelet[2422]: I1027 08:17:39.975087 2422 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:39.976498 kubelet[2422]: E1027 08:17:39.976440 2422 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:39.976498 kubelet[2422]: I1027 08:17:39.976491 2422 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:39.977662 kubelet[2422]: E1027 08:17:39.977640 2422 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:40.160571 kubelet[2422]: I1027 08:17:40.160429 2422 apiserver.go:52] "Watching apiserver" Oct 27 08:17:40.170702 kubelet[2422]: I1027 08:17:40.170660 2422 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 27 08:17:41.087645 kubelet[2422]: I1027 08:17:41.087589 2422 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 08:17:41.092677 
kubelet[2422]: E1027 08:17:41.092640 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:41.150767 systemd[1]: Reload requested from client PID 2714 ('systemctl') (unit session-9.scope)... Oct 27 08:17:41.150790 systemd[1]: Reloading... Oct 27 08:17:41.222489 kubelet[2422]: E1027 08:17:41.222085 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:41.239256 zram_generator::config[2759]: No configuration found. Oct 27 08:17:41.474383 systemd[1]: Reloading finished in 323 ms. Oct 27 08:17:41.498433 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:17:41.498634 kubelet[2422]: I1027 08:17:41.498324 2422 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:17:41.550957 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 08:17:41.551336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:17:41.551397 systemd[1]: kubelet.service: Consumed 1.441s CPU time, 125.3M memory peak. Oct 27 08:17:41.553596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:17:41.774859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:17:41.780316 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 08:17:41.824563 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 08:17:41.824563 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 08:17:41.825022 kubelet[2803]: I1027 08:17:41.824589 2803 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 08:17:41.832578 kubelet[2803]: I1027 08:17:41.832513 2803 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 27 08:17:41.832578 kubelet[2803]: I1027 08:17:41.832546 2803 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 08:17:41.832578 kubelet[2803]: I1027 08:17:41.832577 2803 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 27 08:17:41.832578 kubelet[2803]: I1027 08:17:41.832583 2803 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
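
After the reload, the new kubelet (PID 2803) loads its rotated client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem. A small standard-library sketch for inspecting that file's certificate and its expiry:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            panic(err)
        }
        for {
            var block *pem.Block
            block, data = pem.Decode(data)
            if block == nil {
                break
            }
            if block.Type != "CERTIFICATE" {
                continue // the same file also carries the private key
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                panic(err)
            }
            fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
        }
    }
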
Oct 27 08:17:41.832895 kubelet[2803]: I1027 08:17:41.832835 2803 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 08:17:41.834144 kubelet[2803]: I1027 08:17:41.834116 2803 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 27 08:17:41.836368 kubelet[2803]: I1027 08:17:41.836337 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:17:41.840663 kubelet[2803]: I1027 08:17:41.840622 2803 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 08:17:41.845408 kubelet[2803]: I1027 08:17:41.845360 2803 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 27 08:17:41.845642 kubelet[2803]: I1027 08:17:41.845599 2803 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 08:17:41.845780 kubelet[2803]: I1027 08:17:41.845626 2803 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 08:17:41.845780 kubelet[2803]: I1027 08:17:41.845776 2803 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 08:17:41.845780 kubelet[2803]: I1027 08:17:41.845785 2803 container_manager_linux.go:306] "Creating device plugin manager" Oct 27 08:17:41.845952 kubelet[2803]: I1027 08:17:41.845811 2803 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 27 08:17:41.846703 kubelet[2803]: I1027 08:17:41.846668 2803 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:17:41.846929 kubelet[2803]: I1027 08:17:41.846863 2803 kubelet.go:475] "Attempting to sync node with API server" Oct 27 08:17:41.846929 kubelet[2803]: I1027 08:17:41.846881 2803 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 08:17:41.846929 kubelet[2803]: I1027 08:17:41.846901 2803 kubelet.go:387] "Adding apiserver pod source" Oct 
27 08:17:41.846929 kubelet[2803]: I1027 08:17:41.846917 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 08:17:41.850238 kubelet[2803]: I1027 08:17:41.848543 2803 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 08:17:41.850238 kubelet[2803]: I1027 08:17:41.849052 2803 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 08:17:41.850238 kubelet[2803]: I1027 08:17:41.849075 2803 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 27 08:17:41.856486 kubelet[2803]: I1027 08:17:41.856429 2803 server.go:1262] "Started kubelet" Oct 27 08:17:41.856961 kubelet[2803]: I1027 08:17:41.856593 2803 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 08:17:41.858070 kubelet[2803]: I1027 08:17:41.858028 2803 server.go:310] "Adding debug handlers to kubelet server" Oct 27 08:17:41.858070 kubelet[2803]: I1027 08:17:41.858058 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 08:17:41.859478 kubelet[2803]: I1027 08:17:41.856668 2803 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 08:17:41.859478 kubelet[2803]: I1027 08:17:41.858523 2803 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 27 08:17:41.859478 kubelet[2803]: I1027 08:17:41.858776 2803 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 08:17:41.860312 kubelet[2803]: I1027 08:17:41.860195 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 08:17:41.865343 kubelet[2803]: I1027 08:17:41.865314 2803 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 27 08:17:41.867237 kubelet[2803]: I1027 08:17:41.866510 2803 factory.go:223] Registration of the systemd container factory successfully Oct 27 08:17:41.867299 kubelet[2803]: I1027 08:17:41.867271 2803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 08:17:41.867867 kubelet[2803]: I1027 08:17:41.867541 2803 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 27 08:17:41.867867 kubelet[2803]: E1027 08:17:41.866644 2803 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 08:17:41.867867 kubelet[2803]: I1027 08:17:41.867621 2803 reconciler.go:29] "Reconciler: start to sync state" Oct 27 08:17:41.869012 kubelet[2803]: I1027 08:17:41.868997 2803 factory.go:223] Registration of the containerd container factory successfully Oct 27 08:17:41.878881 kubelet[2803]: I1027 08:17:41.878843 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 27 08:17:41.880351 kubelet[2803]: I1027 08:17:41.880323 2803 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 27 08:17:41.880351 kubelet[2803]: I1027 08:17:41.880349 2803 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 27 08:17:41.880427 kubelet[2803]: I1027 08:17:41.880375 2803 kubelet.go:2427] "Starting kubelet main sync loop" Oct 27 08:17:41.880461 kubelet[2803]: E1027 08:17:41.880423 2803 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 08:17:41.903185 kubelet[2803]: I1027 08:17:41.903140 2803 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 08:17:41.903185 kubelet[2803]: I1027 08:17:41.903167 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 08:17:41.903185 kubelet[2803]: I1027 08:17:41.903242 2803 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:17:41.903487 kubelet[2803]: I1027 08:17:41.903416 2803 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 08:17:41.903487 kubelet[2803]: I1027 08:17:41.903426 2803 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 08:17:41.903487 kubelet[2803]: I1027 08:17:41.903448 2803 policy_none.go:49] "None policy: Start" Oct 27 08:17:41.903487 kubelet[2803]: I1027 08:17:41.903462 2803 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 27 08:17:41.903487 kubelet[2803]: I1027 08:17:41.903473 2803 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 27 08:17:41.903611 kubelet[2803]: I1027 08:17:41.903559 2803 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 27 08:17:41.903611 kubelet[2803]: I1027 08:17:41.903567 2803 policy_none.go:47] "Start" Oct 27 08:17:41.908534 kubelet[2803]: E1027 08:17:41.908421 2803 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 08:17:41.908682 kubelet[2803]: I1027 08:17:41.908663 2803 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 08:17:41.908751 kubelet[2803]: I1027 08:17:41.908680 2803 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 08:17:41.909096 kubelet[2803]: I1027 08:17:41.909064 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 08:17:41.911281 kubelet[2803]: E1027 08:17:41.910556 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 27 08:17:41.981648 kubelet[2803]: I1027 08:17:41.981566 2803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:41.981785 kubelet[2803]: I1027 08:17:41.981683 2803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 08:17:41.981785 kubelet[2803]: I1027 08:17:41.981577 2803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:42.015641 kubelet[2803]: I1027 08:17:42.015590 2803 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 08:17:42.069011 kubelet[2803]: I1027 08:17:42.068827 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:42.117702 kubelet[2803]: E1027 08:17:42.117652 2803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 27 08:17:42.121799 kubelet[2803]: I1027 08:17:42.121758 2803 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 27 08:17:42.121974 kubelet[2803]: I1027 08:17:42.121852 2803 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 08:17:42.169552 kubelet[2803]: I1027 08:17:42.169494 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:42.169552 kubelet[2803]: I1027 08:17:42.169548 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:42.169552 kubelet[2803]: I1027 08:17:42.169566 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 27 08:17:42.169726 kubelet[2803]: I1027 08:17:42.169602 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a05d045b36fb1412d8fd4a3351719417-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a05d045b36fb1412d8fd4a3351719417\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:42.169726 kubelet[2803]: I1027 08:17:42.169631 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:42.169726 kubelet[2803]: I1027 08:17:42.169648 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 08:17:42.169726 kubelet[2803]: I1027 08:17:42.169670 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a05d045b36fb1412d8fd4a3351719417-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a05d045b36fb1412d8fd4a3351719417\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:42.169726 kubelet[2803]: I1027 08:17:42.169689 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a05d045b36fb1412d8fd4a3351719417-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a05d045b36fb1412d8fd4a3351719417\") " pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:42.417511 kubelet[2803]: E1027 08:17:42.417032 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:42.417511 kubelet[2803]: E1027 08:17:42.416990 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:42.418144 kubelet[2803]: E1027 08:17:42.418110 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:42.848422 kubelet[2803]: I1027 08:17:42.848273 2803 apiserver.go:52] "Watching apiserver" Oct 27 08:17:42.868446 kubelet[2803]: I1027 08:17:42.868396 2803 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 27 08:17:42.893457 kubelet[2803]: I1027 08:17:42.893433 2803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:42.893650 kubelet[2803]: I1027 08:17:42.893622 2803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 08:17:42.893793 kubelet[2803]: E1027 08:17:42.893766 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:43.013376 kubelet[2803]: E1027 08:17:43.012970 2803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 27 08:17:43.013376 kubelet[2803]: E1027 08:17:43.013249 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:43.013608 kubelet[2803]: I1027 08:17:43.013369 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.013334905 podStartE2EDuration="2.013334905s" podCreationTimestamp="2025-10-27 08:17:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:17:43.012485691 +0000 UTC m=+1.228347260" watchObservedRunningTime="2025-10-27 08:17:43.013334905 +0000 UTC m=+1.229196484" Oct 27 08:17:43.013608 kubelet[2803]: E1027 08:17:43.013584 2803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 08:17:43.013905 kubelet[2803]: E1027 08:17:43.013838 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:43.037820 kubelet[2803]: I1027 08:17:43.037651 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.037628851 podStartE2EDuration="2.037628851s" podCreationTimestamp="2025-10-27 08:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:17:43.037487983 +0000 UTC m=+1.253349572" watchObservedRunningTime="2025-10-27 08:17:43.037628851 +0000 UTC m=+1.253490430" Oct 27 08:17:43.038094 kubelet[2803]: I1027 08:17:43.037753 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.037747988 podStartE2EDuration="2.037747988s" podCreationTimestamp="2025-10-27 08:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:17:43.026757853 +0000 UTC m=+1.242619432" watchObservedRunningTime="2025-10-27 08:17:43.037747988 +0000 UTC m=+1.253609567" Oct 27 08:17:43.894647 kubelet[2803]: E1027 08:17:43.894605 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:43.895087 kubelet[2803]: E1027 08:17:43.894734 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:43.898311 update_engine[1611]: I20251027 08:17:43.898260 1611 update_attempter.cc:509] Updating boot flags... Oct 27 08:17:44.896006 kubelet[2803]: E1027 08:17:44.895968 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:44.896565 kubelet[2803]: E1027 08:17:44.896179 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:46.812850 kubelet[2803]: E1027 08:17:46.812794 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:47.468172 kubelet[2803]: I1027 08:17:47.468127 2803 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 08:17:47.468539 containerd[1622]: time="2025-10-27T08:17:47.468503617Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 27 08:17:47.468912 kubelet[2803]: I1027 08:17:47.468702 2803 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 08:17:48.406394 systemd[1]: Created slice kubepods-besteffort-pod6748db2e_cdc5_484d_ae31_af72189f5f62.slice - libcontainer container kubepods-besteffort-pod6748db2e_cdc5_484d_ae31_af72189f5f62.slice. Oct 27 08:17:48.511512 kubelet[2803]: I1027 08:17:48.511466 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6748db2e-cdc5-484d-ae31-af72189f5f62-kube-proxy\") pod \"kube-proxy-hcnm6\" (UID: \"6748db2e-cdc5-484d-ae31-af72189f5f62\") " pod="kube-system/kube-proxy-hcnm6" Oct 27 08:17:48.511512 kubelet[2803]: I1027 08:17:48.511501 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6748db2e-cdc5-484d-ae31-af72189f5f62-xtables-lock\") pod \"kube-proxy-hcnm6\" (UID: \"6748db2e-cdc5-484d-ae31-af72189f5f62\") " pod="kube-system/kube-proxy-hcnm6" Oct 27 08:17:48.512013 kubelet[2803]: I1027 08:17:48.511525 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g698\" (UniqueName: \"kubernetes.io/projected/6748db2e-cdc5-484d-ae31-af72189f5f62-kube-api-access-4g698\") pod \"kube-proxy-hcnm6\" (UID: \"6748db2e-cdc5-484d-ae31-af72189f5f62\") " pod="kube-system/kube-proxy-hcnm6" Oct 27 08:17:48.512013 kubelet[2803]: I1027 08:17:48.511546 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6748db2e-cdc5-484d-ae31-af72189f5f62-lib-modules\") pod \"kube-proxy-hcnm6\" (UID: \"6748db2e-cdc5-484d-ae31-af72189f5f62\") " pod="kube-system/kube-proxy-hcnm6" Oct 27 08:17:48.517979 systemd[1]: Created slice kubepods-besteffort-pod13c98bf6_07b1_462c_aa81_9b95152cde58.slice - libcontainer container kubepods-besteffort-pod13c98bf6_07b1_462c_aa81_9b95152cde58.slice. 
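The kuberuntime_manager and kubelet_network entries above show the kubelet handing the node's pod CIDR (192.168.0.0/24) down to the container runtime over CRI. A minimal sketch of that UpdateRuntimeConfig call, assuming the stock k8s.io/cri-api v1 client and containerd's default /run/containerd/containerd.sock endpoint (both assumptions, not taken from this log), might look like:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the containerd CRI endpoint (assumed default socket path).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Equivalent of "Updating runtime config through cri with podcidr":
	// hand the node's pod CIDR to the runtime so it can configure CNI.
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("runtime config updated with pod CIDR 192.168.0.0/24")
}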
Oct 27 08:17:48.611924 kubelet[2803]: I1027 08:17:48.611857 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13c98bf6-07b1-462c-aa81-9b95152cde58-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-8wprs\" (UID: \"13c98bf6-07b1-462c-aa81-9b95152cde58\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8wprs" Oct 27 08:17:48.611924 kubelet[2803]: I1027 08:17:48.611893 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf88w\" (UniqueName: \"kubernetes.io/projected/13c98bf6-07b1-462c-aa81-9b95152cde58-kube-api-access-sf88w\") pod \"tigera-operator-65cdcdfd6d-8wprs\" (UID: \"13c98bf6-07b1-462c-aa81-9b95152cde58\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8wprs" Oct 27 08:17:48.721318 kubelet[2803]: E1027 08:17:48.721008 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:48.722985 containerd[1622]: time="2025-10-27T08:17:48.722667851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcnm6,Uid:6748db2e-cdc5-484d-ae31-af72189f5f62,Namespace:kube-system,Attempt:0,}" Oct 27 08:17:48.743659 containerd[1622]: time="2025-10-27T08:17:48.743601517Z" level=info msg="connecting to shim a7333bbd8e8096f56ffb349f43919b87344887e90e1b165fd2b15818af716185" address="unix:///run/containerd/s/bceaf325a96e507118f3c6aac2d341f2ba0d0cec1d8f29702578cb5bf3124e80" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:17:48.778360 systemd[1]: Started cri-containerd-a7333bbd8e8096f56ffb349f43919b87344887e90e1b165fd2b15818af716185.scope - libcontainer container a7333bbd8e8096f56ffb349f43919b87344887e90e1b165fd2b15818af716185. 
Oct 27 08:17:48.810459 containerd[1622]: time="2025-10-27T08:17:48.810406698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcnm6,Uid:6748db2e-cdc5-484d-ae31-af72189f5f62,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7333bbd8e8096f56ffb349f43919b87344887e90e1b165fd2b15818af716185\"" Oct 27 08:17:48.811026 kubelet[2803]: E1027 08:17:48.811000 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:48.817336 containerd[1622]: time="2025-10-27T08:17:48.817275975Z" level=info msg="CreateContainer within sandbox \"a7333bbd8e8096f56ffb349f43919b87344887e90e1b165fd2b15818af716185\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 08:17:48.827915 containerd[1622]: time="2025-10-27T08:17:48.827860445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8wprs,Uid:13c98bf6-07b1-462c-aa81-9b95152cde58,Namespace:tigera-operator,Attempt:0,}" Oct 27 08:17:48.830240 containerd[1622]: time="2025-10-27T08:17:48.829881372Z" level=info msg="Container c2d6a730cb19f516f0da82d190d125f305b019a13ae3e6d76693464c6f4eef1a: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:17:48.844124 containerd[1622]: time="2025-10-27T08:17:48.844076619Z" level=info msg="CreateContainer within sandbox \"a7333bbd8e8096f56ffb349f43919b87344887e90e1b165fd2b15818af716185\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2d6a730cb19f516f0da82d190d125f305b019a13ae3e6d76693464c6f4eef1a\"" Oct 27 08:17:48.845011 containerd[1622]: time="2025-10-27T08:17:48.844753280Z" level=info msg="StartContainer for \"c2d6a730cb19f516f0da82d190d125f305b019a13ae3e6d76693464c6f4eef1a\"" Oct 27 08:17:48.850033 containerd[1622]: time="2025-10-27T08:17:48.849984305Z" level=info msg="connecting to shim c2d6a730cb19f516f0da82d190d125f305b019a13ae3e6d76693464c6f4eef1a" address="unix:///run/containerd/s/bceaf325a96e507118f3c6aac2d341f2ba0d0cec1d8f29702578cb5bf3124e80" protocol=ttrpc version=3 Oct 27 08:17:48.867372 containerd[1622]: time="2025-10-27T08:17:48.866898912Z" level=info msg="connecting to shim 0a38eb8eb015261569ead36f179b8765c0b5216aa02505b0ca700364bc709a42" address="unix:///run/containerd/s/0566a99bdf53b73262b917a02290e4f4e9388262abad9122e7331e4d7356e38c" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:17:48.873354 systemd[1]: Started cri-containerd-c2d6a730cb19f516f0da82d190d125f305b019a13ae3e6d76693464c6f4eef1a.scope - libcontainer container c2d6a730cb19f516f0da82d190d125f305b019a13ae3e6d76693464c6f4eef1a. Oct 27 08:17:48.895515 systemd[1]: Started cri-containerd-0a38eb8eb015261569ead36f179b8765c0b5216aa02505b0ca700364bc709a42.scope - libcontainer container 0a38eb8eb015261569ead36f179b8765c0b5216aa02505b0ca700364bc709a42. 
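The kube-proxy entries above follow the usual three CRI calls: RunPodSandbox (containerd starts the shim and returns the sandbox id), CreateContainer inside that sandbox, then StartContainer, which the log reports returning successfully just below. A rough sketch of that sequence with the same v1 client; the runKubeProxy name, the kube-proxy image tag and the command path are illustrative assumptions, while the pod metadata is copied from the log, and the RuntimeServiceClient is assumed to be dialed exactly as in the earlier sketch:

package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runKubeProxy mirrors the three CRI calls visible in the log:
// RunPodSandbox -> CreateContainer -> StartContainer.
func runKubeProxy(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		// Metadata copied from the RunPodSandbox entry above.
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-hcnm6",
			Uid:       "6748db2e-cdc5-484d-ae31-af72189f5f62",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}

	// 1. Create the pod sandbox; containerd starts the shim and returns its id.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}

	// 2. Create the kube-proxy container inside that sandbox.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Image tag and command path are hypothetical placeholders.
			Image:   &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.34.1"},
			Command: []string{"/usr/local/bin/kube-proxy"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}

	// 3. Start it; this is the step the "StartContainer ..." entries correspond to.
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId})
	return err
}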
Oct 27 08:17:48.935992 containerd[1622]: time="2025-10-27T08:17:48.935935027Z" level=info msg="StartContainer for \"c2d6a730cb19f516f0da82d190d125f305b019a13ae3e6d76693464c6f4eef1a\" returns successfully" Oct 27 08:17:48.960751 containerd[1622]: time="2025-10-27T08:17:48.960700349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8wprs,Uid:13c98bf6-07b1-462c-aa81-9b95152cde58,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0a38eb8eb015261569ead36f179b8765c0b5216aa02505b0ca700364bc709a42\"" Oct 27 08:17:48.964746 containerd[1622]: time="2025-10-27T08:17:48.964706964Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 08:17:49.665688 kubelet[2803]: E1027 08:17:49.665630 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:49.910202 kubelet[2803]: E1027 08:17:49.910146 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:49.910984 kubelet[2803]: E1027 08:17:49.910924 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:50.236275 kubelet[2803]: I1027 08:17:50.236132 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hcnm6" podStartSLOduration=2.236107568 podStartE2EDuration="2.236107568s" podCreationTimestamp="2025-10-27 08:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:17:50.05816151 +0000 UTC m=+8.274023089" watchObservedRunningTime="2025-10-27 08:17:50.236107568 +0000 UTC m=+8.451969147" Oct 27 08:17:50.773701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3782930702.mount: Deactivated successfully. 
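The PullImage request logged just above completes a couple of seconds later (the "Pulled image ... in 2.245040164s" entry further down). Issued directly it is a single call to CRI's image service; pullOperator is an illustrative name, and the ImageServiceClient is assumed to come from runtimeapi.NewImageServiceClient over the same connection as the earlier sketches:

package crisketch

import (
	"context"
	"log"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// pullOperator issues the same CRI PullImage call the log shows above,
// for the image reference taken verbatim from the log.
func pullOperator(ctx context.Context, img runtimeapi.ImageServiceClient) error {
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
	})
	if err != nil {
		return err
	}
	// The returned reference is the digest containerd reports once the pull
	// completes ("Pulled image ... returns image reference ...").
	log.Printf("pulled: %s", resp.ImageRef)
	return nil
}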
Oct 27 08:17:50.912865 kubelet[2803]: E1027 08:17:50.912802 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:51.204411 containerd[1622]: time="2025-10-27T08:17:51.204270887Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:51.205072 containerd[1622]: time="2025-10-27T08:17:51.205046082Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 27 08:17:51.206501 containerd[1622]: time="2025-10-27T08:17:51.206450076Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:51.209530 containerd[1622]: time="2025-10-27T08:17:51.209487507Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:17:51.209949 containerd[1622]: time="2025-10-27T08:17:51.209921979Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.245040164s" Oct 27 08:17:51.209993 containerd[1622]: time="2025-10-27T08:17:51.209950453Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 27 08:17:51.219132 containerd[1622]: time="2025-10-27T08:17:51.219079018Z" level=info msg="CreateContainer within sandbox \"0a38eb8eb015261569ead36f179b8765c0b5216aa02505b0ca700364bc709a42\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 08:17:51.227255 containerd[1622]: time="2025-10-27T08:17:51.227204527Z" level=info msg="Container adfd4b8286a003dd46c0e75e6276c41583216bdaf244a99b295c9a018120c2be: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:17:51.239073 containerd[1622]: time="2025-10-27T08:17:51.239019611Z" level=info msg="CreateContainer within sandbox \"0a38eb8eb015261569ead36f179b8765c0b5216aa02505b0ca700364bc709a42\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"adfd4b8286a003dd46c0e75e6276c41583216bdaf244a99b295c9a018120c2be\"" Oct 27 08:17:51.239600 containerd[1622]: time="2025-10-27T08:17:51.239566084Z" level=info msg="StartContainer for \"adfd4b8286a003dd46c0e75e6276c41583216bdaf244a99b295c9a018120c2be\"" Oct 27 08:17:51.240579 containerd[1622]: time="2025-10-27T08:17:51.240546678Z" level=info msg="connecting to shim adfd4b8286a003dd46c0e75e6276c41583216bdaf244a99b295c9a018120c2be" address="unix:///run/containerd/s/0566a99bdf53b73262b917a02290e4f4e9388262abad9122e7331e4d7356e38c" protocol=ttrpc version=3 Oct 27 08:17:51.300452 systemd[1]: Started cri-containerd-adfd4b8286a003dd46c0e75e6276c41583216bdaf244a99b295c9a018120c2be.scope - libcontainer container adfd4b8286a003dd46c0e75e6276c41583216bdaf244a99b295c9a018120c2be. 
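The recurring "Nameserver limits exceeded" warnings come from the kubelet capping a pod's resolv.conf at three nameservers and dropping the rest; the applied line in the log (1.1.1.1 1.0.0.1 8.8.8.8) is what survives the trim. A minimal sketch of that trimming, assuming the host file is /etc/resolv.conf and a three-server cap:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet's per-pod nameserver cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line from the host resolv.conf.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	// Keep only the first three, as the kubelet does when it logs the warning.
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, omitting %d of %d\n",
			len(servers)-maxNameservers, len(servers))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}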
Oct 27 08:17:51.333640 containerd[1622]: time="2025-10-27T08:17:51.333602085Z" level=info msg="StartContainer for \"adfd4b8286a003dd46c0e75e6276c41583216bdaf244a99b295c9a018120c2be\" returns successfully" Oct 27 08:17:51.922942 kubelet[2803]: I1027 08:17:51.922841 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-8wprs" podStartSLOduration=1.67000224 podStartE2EDuration="3.922820483s" podCreationTimestamp="2025-10-27 08:17:48 +0000 UTC" firstStartedPulling="2025-10-27 08:17:48.96250228 +0000 UTC m=+7.178363859" lastFinishedPulling="2025-10-27 08:17:51.215320523 +0000 UTC m=+9.431182102" observedRunningTime="2025-10-27 08:17:51.922577064 +0000 UTC m=+10.138438663" watchObservedRunningTime="2025-10-27 08:17:51.922820483 +0000 UTC m=+10.138682062" Oct 27 08:17:54.390257 kubelet[2803]: E1027 08:17:54.388981 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:54.920566 kubelet[2803]: E1027 08:17:54.920480 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:56.819935 kubelet[2803]: E1027 08:17:56.819544 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:56.924067 kubelet[2803]: E1027 08:17:56.923996 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:17:58.122947 sudo[1854]: pam_unix(sudo:session): session closed for user root Oct 27 08:17:58.125317 sshd[1853]: Connection closed by 10.0.0.1 port 55606 Oct 27 08:17:58.125744 sshd-session[1850]: pam_unix(sshd:session): session closed for user core Oct 27 08:17:58.149786 systemd[1]: sshd@8-10.0.0.23:22-10.0.0.1:55606.service: Deactivated successfully. Oct 27 08:17:58.152585 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 08:17:58.152833 systemd[1]: session-9.scope: Consumed 5.147s CPU time, 229.4M memory peak. Oct 27 08:17:58.154431 systemd-logind[1605]: Session 9 logged out. Waiting for processes to exit. Oct 27 08:17:58.156170 systemd-logind[1605]: Removed session 9. Oct 27 08:18:02.139413 systemd[1]: Created slice kubepods-besteffort-pod0311c2d6_64d2_47ae_a5b3_e6d6214317fe.slice - libcontainer container kubepods-besteffort-pod0311c2d6_64d2_47ae_a5b3_e6d6214317fe.slice. 
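The pod_startup_latency_tracker entry above for tigera-operator is internally consistent: the E2E duration is the observed running time minus the pod creation time, and the SLO duration additionally excludes the time spent pulling the operator image. A short check with the timestamps copied from that entry (the formula is inferred from the logged values, not quoted from kubelet source):

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// The log prints Go's default time.Time format; parse it back with the same layout.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the tigera-operator latency entry above.
	created := mustParse("2025-10-27 08:17:48 +0000 UTC")
	running := mustParse("2025-10-27 08:17:51.922820483 +0000 UTC")
	pullStart := mustParse("2025-10-27 08:17:48.96250228 +0000 UTC")
	pullEnd := mustParse("2025-10-27 08:17:51.215320523 +0000 UTC")

	e2e := running.Sub(created)    // podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // time spent pulling the operator image
	slo := e2e - pull              // podStartSLOduration excludes the pull

	fmt.Println("E2E: ", e2e)  // 3.922820483s, as logged
	fmt.Println("pull:", pull) // 2.252818243s
	fmt.Println("SLO: ", slo)  // 1.67000224s, as logged
}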
Oct 27 08:18:02.198803 kubelet[2803]: I1027 08:18:02.198706 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq7lm\" (UniqueName: \"kubernetes.io/projected/0311c2d6-64d2-47ae-a5b3-e6d6214317fe-kube-api-access-jq7lm\") pod \"calico-typha-5765dbdb9b-kjfnd\" (UID: \"0311c2d6-64d2-47ae-a5b3-e6d6214317fe\") " pod="calico-system/calico-typha-5765dbdb9b-kjfnd" Oct 27 08:18:02.198803 kubelet[2803]: I1027 08:18:02.198784 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0311c2d6-64d2-47ae-a5b3-e6d6214317fe-tigera-ca-bundle\") pod \"calico-typha-5765dbdb9b-kjfnd\" (UID: \"0311c2d6-64d2-47ae-a5b3-e6d6214317fe\") " pod="calico-system/calico-typha-5765dbdb9b-kjfnd" Oct 27 08:18:02.198803 kubelet[2803]: I1027 08:18:02.198799 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0311c2d6-64d2-47ae-a5b3-e6d6214317fe-typha-certs\") pod \"calico-typha-5765dbdb9b-kjfnd\" (UID: \"0311c2d6-64d2-47ae-a5b3-e6d6214317fe\") " pod="calico-system/calico-typha-5765dbdb9b-kjfnd" Oct 27 08:18:02.330277 systemd[1]: Created slice kubepods-besteffort-podc4d06f78_947d_4d60_abad_9f45cbc7b08f.slice - libcontainer container kubepods-besteffort-podc4d06f78_947d_4d60_abad_9f45cbc7b08f.slice. Oct 27 08:18:02.401250 kubelet[2803]: I1027 08:18:02.400900 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4d06f78-947d-4d60-abad-9f45cbc7b08f-cni-net-dir\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401250 kubelet[2803]: I1027 08:18:02.400934 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4d06f78-947d-4d60-abad-9f45cbc7b08f-lib-modules\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401250 kubelet[2803]: I1027 08:18:02.400950 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfwk\" (UniqueName: \"kubernetes.io/projected/c4d06f78-947d-4d60-abad-9f45cbc7b08f-kube-api-access-kkfwk\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401250 kubelet[2803]: I1027 08:18:02.400968 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4d06f78-947d-4d60-abad-9f45cbc7b08f-var-lib-calico\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401250 kubelet[2803]: I1027 08:18:02.400990 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4d06f78-947d-4d60-abad-9f45cbc7b08f-flexvol-driver-host\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401559 kubelet[2803]: I1027 08:18:02.401006 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4d06f78-947d-4d60-abad-9f45cbc7b08f-node-certs\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401559 kubelet[2803]: I1027 08:18:02.401143 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4d06f78-947d-4d60-abad-9f45cbc7b08f-tigera-ca-bundle\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401559 kubelet[2803]: I1027 08:18:02.401199 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4d06f78-947d-4d60-abad-9f45cbc7b08f-var-run-calico\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401559 kubelet[2803]: I1027 08:18:02.401232 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4d06f78-947d-4d60-abad-9f45cbc7b08f-xtables-lock\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401559 kubelet[2803]: I1027 08:18:02.401264 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4d06f78-947d-4d60-abad-9f45cbc7b08f-cni-bin-dir\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401688 kubelet[2803]: I1027 08:18:02.401343 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4d06f78-947d-4d60-abad-9f45cbc7b08f-cni-log-dir\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.401688 kubelet[2803]: I1027 08:18:02.401413 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4d06f78-947d-4d60-abad-9f45cbc7b08f-policysync\") pod \"calico-node-47zlw\" (UID: \"c4d06f78-947d-4d60-abad-9f45cbc7b08f\") " pod="calico-system/calico-node-47zlw" Oct 27 08:18:02.452286 kubelet[2803]: E1027 08:18:02.452199 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:02.452837 containerd[1622]: time="2025-10-27T08:18:02.452798896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5765dbdb9b-kjfnd,Uid:0311c2d6-64d2-47ae-a5b3-e6d6214317fe,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:02.496322 containerd[1622]: time="2025-10-27T08:18:02.496258647Z" level=info msg="connecting to shim ec81608d3f643e900c4692b715f42a180bd093de163dde24119e6541370d9e68" address="unix:///run/containerd/s/9ae5b75e1e11680519534f4a81696f2a887fe842d8001b1338d89ada814d5c7b" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:02.509889 kubelet[2803]: E1027 08:18:02.509840 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.509889 kubelet[2803]: W1027 
08:18:02.509900 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.510062 kubelet[2803]: E1027 08:18:02.509924 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.516534 kubelet[2803]: E1027 08:18:02.516157 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:02.523229 kubelet[2803]: E1027 08:18:02.523103 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.523229 kubelet[2803]: W1027 08:18:02.523127 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.523229 kubelet[2803]: E1027 08:18:02.523147 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.539368 systemd[1]: Started cri-containerd-ec81608d3f643e900c4692b715f42a180bd093de163dde24119e6541370d9e68.scope - libcontainer container ec81608d3f643e900c4692b715f42a180bd093de163dde24119e6541370d9e68. Oct 27 08:18:02.583743 kubelet[2803]: E1027 08:18:02.583693 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.583743 kubelet[2803]: W1027 08:18:02.583722 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.583743 kubelet[2803]: E1027 08:18:02.583747 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.583960 kubelet[2803]: E1027 08:18:02.583940 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.583960 kubelet[2803]: W1027 08:18:02.583956 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.584050 kubelet[2803]: E1027 08:18:02.583967 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.584637 kubelet[2803]: E1027 08:18:02.584456 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.584637 kubelet[2803]: W1027 08:18:02.584471 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.584637 kubelet[2803]: E1027 08:18:02.584480 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.585911 kubelet[2803]: E1027 08:18:02.585748 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.585911 kubelet[2803]: W1027 08:18:02.585763 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.585911 kubelet[2803]: E1027 08:18:02.585775 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.587397 kubelet[2803]: E1027 08:18:02.587347 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.588300 kubelet[2803]: W1027 08:18:02.588262 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.588300 kubelet[2803]: E1027 08:18:02.588296 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.588586 kubelet[2803]: E1027 08:18:02.588544 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.588586 kubelet[2803]: W1027 08:18:02.588558 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.588586 kubelet[2803]: E1027 08:18:02.588568 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.588914 kubelet[2803]: E1027 08:18:02.588849 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.588914 kubelet[2803]: W1027 08:18:02.588866 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.588914 kubelet[2803]: E1027 08:18:02.588879 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.589143 kubelet[2803]: E1027 08:18:02.589119 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.589143 kubelet[2803]: W1027 08:18:02.589134 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.589143 kubelet[2803]: E1027 08:18:02.589143 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.589397 kubelet[2803]: E1027 08:18:02.589366 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.589397 kubelet[2803]: W1027 08:18:02.589375 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.589397 kubelet[2803]: E1027 08:18:02.589384 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.589550 kubelet[2803]: E1027 08:18:02.589533 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.589550 kubelet[2803]: W1027 08:18:02.589545 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.589599 kubelet[2803]: E1027 08:18:02.589553 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.589720 kubelet[2803]: E1027 08:18:02.589696 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.589720 kubelet[2803]: W1027 08:18:02.589714 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.589720 kubelet[2803]: E1027 08:18:02.589722 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.589897 kubelet[2803]: E1027 08:18:02.589881 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.589897 kubelet[2803]: W1027 08:18:02.589892 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.589961 kubelet[2803]: E1027 08:18:02.589901 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.590130 kubelet[2803]: E1027 08:18:02.590078 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.590130 kubelet[2803]: W1027 08:18:02.590095 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.590130 kubelet[2803]: E1027 08:18:02.590106 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.590322 kubelet[2803]: E1027 08:18:02.590302 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.590322 kubelet[2803]: W1027 08:18:02.590319 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.590407 kubelet[2803]: E1027 08:18:02.590330 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.590537 kubelet[2803]: E1027 08:18:02.590514 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.590537 kubelet[2803]: W1027 08:18:02.590531 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.590584 kubelet[2803]: E1027 08:18:02.590549 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.591264 kubelet[2803]: E1027 08:18:02.590794 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.591264 kubelet[2803]: W1027 08:18:02.590818 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.591264 kubelet[2803]: E1027 08:18:02.590835 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.591264 kubelet[2803]: E1027 08:18:02.591110 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.591264 kubelet[2803]: W1027 08:18:02.591125 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.591264 kubelet[2803]: E1027 08:18:02.591144 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.591550 kubelet[2803]: E1027 08:18:02.591431 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.591550 kubelet[2803]: W1027 08:18:02.591459 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.591550 kubelet[2803]: E1027 08:18:02.591522 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.591882 kubelet[2803]: E1027 08:18:02.591803 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.591882 kubelet[2803]: W1027 08:18:02.591813 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.591882 kubelet[2803]: E1027 08:18:02.591859 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.592344 kubelet[2803]: E1027 08:18:02.592140 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.592344 kubelet[2803]: W1027 08:18:02.592151 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.592344 kubelet[2803]: E1027 08:18:02.592186 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.595361 containerd[1622]: time="2025-10-27T08:18:02.595317974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5765dbdb9b-kjfnd,Uid:0311c2d6-64d2-47ae-a5b3-e6d6214317fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec81608d3f643e900c4692b715f42a180bd093de163dde24119e6541370d9e68\"" Oct 27 08:18:02.596545 kubelet[2803]: E1027 08:18:02.596513 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:02.598045 containerd[1622]: time="2025-10-27T08:18:02.598011254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 08:18:02.605988 kubelet[2803]: E1027 08:18:02.605938 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.606104 kubelet[2803]: W1027 08:18:02.606016 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.606104 kubelet[2803]: E1027 08:18:02.606052 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.606567 kubelet[2803]: I1027 08:18:02.606540 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e0d791c-1eec-4e51-af5e-ee7c86a5bb94-kubelet-dir\") pod \"csi-node-driver-h6bqk\" (UID: \"2e0d791c-1eec-4e51-af5e-ee7c86a5bb94\") " pod="calico-system/csi-node-driver-h6bqk" Oct 27 08:18:02.608338 kubelet[2803]: E1027 08:18:02.608308 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.608338 kubelet[2803]: W1027 08:18:02.608330 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.608338 kubelet[2803]: E1027 08:18:02.608341 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.608842 kubelet[2803]: E1027 08:18:02.608619 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.608842 kubelet[2803]: W1027 08:18:02.608644 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.608842 kubelet[2803]: E1027 08:18:02.608671 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.609099 kubelet[2803]: E1027 08:18:02.609018 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.609099 kubelet[2803]: W1027 08:18:02.609083 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.609181 kubelet[2803]: E1027 08:18:02.609141 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.609523 kubelet[2803]: I1027 08:18:02.609495 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e0d791c-1eec-4e51-af5e-ee7c86a5bb94-registration-dir\") pod \"csi-node-driver-h6bqk\" (UID: \"2e0d791c-1eec-4e51-af5e-ee7c86a5bb94\") " pod="calico-system/csi-node-driver-h6bqk" Oct 27 08:18:02.609603 kubelet[2803]: E1027 08:18:02.609544 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.609603 kubelet[2803]: W1027 08:18:02.609557 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.609603 kubelet[2803]: E1027 08:18:02.609567 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.609999 kubelet[2803]: E1027 08:18:02.609978 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.609999 kubelet[2803]: W1027 08:18:02.609995 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.610084 kubelet[2803]: E1027 08:18:02.610007 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.611123 kubelet[2803]: E1027 08:18:02.611105 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.611123 kubelet[2803]: W1027 08:18:02.611117 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.611225 kubelet[2803]: E1027 08:18:02.611128 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.611262 kubelet[2803]: I1027 08:18:02.611232 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2e0d791c-1eec-4e51-af5e-ee7c86a5bb94-varrun\") pod \"csi-node-driver-h6bqk\" (UID: \"2e0d791c-1eec-4e51-af5e-ee7c86a5bb94\") " pod="calico-system/csi-node-driver-h6bqk" Oct 27 08:18:02.611735 kubelet[2803]: E1027 08:18:02.611711 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.611806 kubelet[2803]: W1027 08:18:02.611750 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.611806 kubelet[2803]: E1027 08:18:02.611764 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.611806 kubelet[2803]: I1027 08:18:02.611789 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qkk6\" (UniqueName: \"kubernetes.io/projected/2e0d791c-1eec-4e51-af5e-ee7c86a5bb94-kube-api-access-4qkk6\") pod \"csi-node-driver-h6bqk\" (UID: \"2e0d791c-1eec-4e51-af5e-ee7c86a5bb94\") " pod="calico-system/csi-node-driver-h6bqk" Oct 27 08:18:02.612751 kubelet[2803]: E1027 08:18:02.612730 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.612751 kubelet[2803]: W1027 08:18:02.612745 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.612751 kubelet[2803]: E1027 08:18:02.612755 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.612854 kubelet[2803]: I1027 08:18:02.612777 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e0d791c-1eec-4e51-af5e-ee7c86a5bb94-socket-dir\") pod \"csi-node-driver-h6bqk\" (UID: \"2e0d791c-1eec-4e51-af5e-ee7c86a5bb94\") " pod="calico-system/csi-node-driver-h6bqk" Oct 27 08:18:02.613396 kubelet[2803]: E1027 08:18:02.613377 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.613396 kubelet[2803]: W1027 08:18:02.613392 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.613474 kubelet[2803]: E1027 08:18:02.613403 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.614005 kubelet[2803]: E1027 08:18:02.613955 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.614005 kubelet[2803]: W1027 08:18:02.613978 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.614005 kubelet[2803]: E1027 08:18:02.613988 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.614957 kubelet[2803]: E1027 08:18:02.614934 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.614957 kubelet[2803]: W1027 08:18:02.614952 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.615079 kubelet[2803]: E1027 08:18:02.614963 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.615500 kubelet[2803]: E1027 08:18:02.615462 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.615500 kubelet[2803]: W1027 08:18:02.615477 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.615500 kubelet[2803]: E1027 08:18:02.615488 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.615892 kubelet[2803]: E1027 08:18:02.615874 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.615892 kubelet[2803]: W1027 08:18:02.615887 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.615892 kubelet[2803]: E1027 08:18:02.615898 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.616985 kubelet[2803]: E1027 08:18:02.616947 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.616985 kubelet[2803]: W1027 08:18:02.616963 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.616985 kubelet[2803]: E1027 08:18:02.616982 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.635688 kubelet[2803]: E1027 08:18:02.635640 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:02.636320 containerd[1622]: time="2025-10-27T08:18:02.636261687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47zlw,Uid:c4d06f78-947d-4d60-abad-9f45cbc7b08f,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:02.659695 containerd[1622]: time="2025-10-27T08:18:02.658771817Z" level=info msg="connecting to shim f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2" address="unix:///run/containerd/s/0e551dfda2301b841ed958866c8880bb395400a0551659a528d9826cd9a36d74" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:02.684379 systemd[1]: Started cri-containerd-f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2.scope - libcontainer container f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2. Oct 27 08:18:02.713647 kubelet[2803]: E1027 08:18:02.713609 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.713647 kubelet[2803]: W1027 08:18:02.713631 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.713647 kubelet[2803]: E1027 08:18:02.713654 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.713880 kubelet[2803]: E1027 08:18:02.713859 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.713880 kubelet[2803]: W1027 08:18:02.713871 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.713880 kubelet[2803]: E1027 08:18:02.713880 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.714151 kubelet[2803]: E1027 08:18:02.714120 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.714151 kubelet[2803]: W1027 08:18:02.714134 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.714151 kubelet[2803]: E1027 08:18:02.714143 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.714476 kubelet[2803]: E1027 08:18:02.714422 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.714476 kubelet[2803]: W1027 08:18:02.714434 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.714476 kubelet[2803]: E1027 08:18:02.714444 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.714679 containerd[1622]: time="2025-10-27T08:18:02.714467424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47zlw,Uid:c4d06f78-947d-4d60-abad-9f45cbc7b08f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2\"" Oct 27 08:18:02.715057 kubelet[2803]: E1027 08:18:02.714884 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.715057 kubelet[2803]: W1027 08:18:02.714912 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.715057 kubelet[2803]: E1027 08:18:02.714941 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.715175 kubelet[2803]: E1027 08:18:02.715163 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:02.715347 kubelet[2803]: E1027 08:18:02.715326 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.715347 kubelet[2803]: W1027 08:18:02.715340 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.715347 kubelet[2803]: E1027 08:18:02.715350 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.715678 kubelet[2803]: E1027 08:18:02.715607 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.715678 kubelet[2803]: W1027 08:18:02.715634 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.715790 kubelet[2803]: E1027 08:18:02.715763 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.716100 kubelet[2803]: E1027 08:18:02.716002 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.716100 kubelet[2803]: W1027 08:18:02.716017 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.716100 kubelet[2803]: E1027 08:18:02.716027 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.716370 kubelet[2803]: E1027 08:18:02.716344 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.716370 kubelet[2803]: W1027 08:18:02.716364 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.716463 kubelet[2803]: E1027 08:18:02.716387 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.716718 kubelet[2803]: E1027 08:18:02.716688 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.716718 kubelet[2803]: W1027 08:18:02.716701 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.716718 kubelet[2803]: E1027 08:18:02.716714 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.717082 kubelet[2803]: E1027 08:18:02.716943 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.717082 kubelet[2803]: W1027 08:18:02.716957 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.717082 kubelet[2803]: E1027 08:18:02.716979 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.717243 kubelet[2803]: E1027 08:18:02.717223 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.717243 kubelet[2803]: W1027 08:18:02.717238 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.717328 kubelet[2803]: E1027 08:18:02.717251 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.717503 kubelet[2803]: E1027 08:18:02.717486 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.717503 kubelet[2803]: W1027 08:18:02.717499 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.717591 kubelet[2803]: E1027 08:18:02.717509 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.717784 kubelet[2803]: E1027 08:18:02.717767 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.717784 kubelet[2803]: W1027 08:18:02.717780 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.717839 kubelet[2803]: E1027 08:18:02.717789 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.718090 kubelet[2803]: E1027 08:18:02.718036 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.718090 kubelet[2803]: W1027 08:18:02.718068 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.718090 kubelet[2803]: E1027 08:18:02.718078 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.718401 kubelet[2803]: E1027 08:18:02.718371 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.718401 kubelet[2803]: W1027 08:18:02.718398 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.718488 kubelet[2803]: E1027 08:18:02.718408 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.718655 kubelet[2803]: E1027 08:18:02.718639 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.718655 kubelet[2803]: W1027 08:18:02.718650 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.718707 kubelet[2803]: E1027 08:18:02.718660 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.718887 kubelet[2803]: E1027 08:18:02.718871 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.718887 kubelet[2803]: W1027 08:18:02.718882 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.718942 kubelet[2803]: E1027 08:18:02.718891 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.719114 kubelet[2803]: E1027 08:18:02.719097 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.719114 kubelet[2803]: W1027 08:18:02.719109 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.719171 kubelet[2803]: E1027 08:18:02.719118 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.719346 kubelet[2803]: E1027 08:18:02.719329 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.719346 kubelet[2803]: W1027 08:18:02.719341 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.719399 kubelet[2803]: E1027 08:18:02.719351 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.719595 kubelet[2803]: E1027 08:18:02.719575 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.719595 kubelet[2803]: W1027 08:18:02.719591 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.719636 kubelet[2803]: E1027 08:18:02.719605 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.719808 kubelet[2803]: E1027 08:18:02.719791 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.719808 kubelet[2803]: W1027 08:18:02.719802 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.719808 kubelet[2803]: E1027 08:18:02.719811 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.720025 kubelet[2803]: E1027 08:18:02.720009 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.720025 kubelet[2803]: W1027 08:18:02.720020 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.720087 kubelet[2803]: E1027 08:18:02.720030 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.720450 kubelet[2803]: E1027 08:18:02.720430 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.720450 kubelet[2803]: W1027 08:18:02.720447 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.720523 kubelet[2803]: E1027 08:18:02.720460 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:02.720743 kubelet[2803]: E1027 08:18:02.720724 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.720743 kubelet[2803]: W1027 08:18:02.720737 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.720807 kubelet[2803]: E1027 08:18:02.720748 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:02.729128 kubelet[2803]: E1027 08:18:02.729101 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:02.729128 kubelet[2803]: W1027 08:18:02.729116 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:02.729128 kubelet[2803]: E1027 08:18:02.729129 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:03.884577 kubelet[2803]: E1027 08:18:03.884508 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:04.493638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373901864.mount: Deactivated successfully. 
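The recurring FlexVolume failures above come from the kubelet's plugin prober: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, and for every vendor~driver directory it executes the driver binary with the argument init and unmarshals the JSON the binary writes to stdout. Here the nodeagent~uds directory is present but the uds binary inside it is not (the exec fails with "executable file not found"), so the call produces empty output and the unmarshal fails with "unexpected end of JSON input". The sketch below is purely illustrative (it is not the real nodeagent~uds driver); it shows the minimal init response a FlexVolume binary is expected to print.

```go
// flexvolume_init_stub.go - hypothetical stand-in for a FlexVolume driver
// binary living at <plugin-dir>/<vendor~driver>/<driver>; it only answers
// the "init" call the kubelet issues while probing the plugin directory.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape FlexVolume drivers print to stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		// Printing nothing to stdout is exactly what yields
		// "unexpected end of JSON input" on the kubelet side.
		fmt.Fprintln(os.Stderr, "usage: driver <init|mount|unmount|...>")
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// The kubelet parses whatever appears on stdout here.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
		os.Exit(1)
	}
}
```

Dropping any executable that prints a well-formed init response into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds would stop the probe spam; the errors are otherwise generally benign as long as no pod actually mounts a volume through that driver.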
Oct 27 08:18:05.599303 containerd[1622]: time="2025-10-27T08:18:05.599241762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:05.600086 containerd[1622]: time="2025-10-27T08:18:05.600051816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 27 08:18:05.601449 containerd[1622]: time="2025-10-27T08:18:05.601395885Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:05.605037 containerd[1622]: time="2025-10-27T08:18:05.604994906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.006945049s" Oct 27 08:18:05.605196 containerd[1622]: time="2025-10-27T08:18:05.605145349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 27 08:18:05.605303 containerd[1622]: time="2025-10-27T08:18:05.605265715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:05.607710 containerd[1622]: time="2025-10-27T08:18:05.607608694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 27 08:18:05.624227 containerd[1622]: time="2025-10-27T08:18:05.624167968Z" level=info msg="CreateContainer within sandbox \"ec81608d3f643e900c4692b715f42a180bd093de163dde24119e6541370d9e68\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 27 08:18:05.632274 containerd[1622]: time="2025-10-27T08:18:05.632228883Z" level=info msg="Container fed9dfc0faea85f7ce8a0932794fba829cbb78a6986195d58e55387285626d5b: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:18:05.640535 containerd[1622]: time="2025-10-27T08:18:05.640492721Z" level=info msg="CreateContainer within sandbox \"ec81608d3f643e900c4692b715f42a180bd093de163dde24119e6541370d9e68\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fed9dfc0faea85f7ce8a0932794fba829cbb78a6986195d58e55387285626d5b\"" Oct 27 08:18:05.641044 containerd[1622]: time="2025-10-27T08:18:05.641019001Z" level=info msg="StartContainer for \"fed9dfc0faea85f7ce8a0932794fba829cbb78a6986195d58e55387285626d5b\"" Oct 27 08:18:05.642116 containerd[1622]: time="2025-10-27T08:18:05.642093994Z" level=info msg="connecting to shim fed9dfc0faea85f7ce8a0932794fba829cbb78a6986195d58e55387285626d5b" address="unix:///run/containerd/s/9ae5b75e1e11680519534f4a81696f2a887fe842d8001b1338d89ada814d5c7b" protocol=ttrpc version=3 Oct 27 08:18:05.674477 systemd[1]: Started cri-containerd-fed9dfc0faea85f7ce8a0932794fba829cbb78a6986195d58e55387285626d5b.scope - libcontainer container fed9dfc0faea85f7ce8a0932794fba829cbb78a6986195d58e55387285626d5b. 
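The "Nameserver limits exceeded" warnings scattered through the log (and repeated just below) come from the kubelet's pod DNS configurer: when it builds a pod's resolv.conf from the node's resolv.conf it keeps at most three nameservers, which is what the libc resolver honours, and drops the rest. The node here evidently lists more than three resolvers, and the kubelet applies 1.1.1.1, 1.0.0.1 and 8.8.8.8. Below is a toy reproduction of that truncation in plain Go; it assumes a cap of three as in the kubelet and is not the kubelet's actual code.

```go
// resolvconf_cap.go - toy illustration (not kubelet code) of the nameserver
// cap behind the "Nameserver limits exceeded" warning: only the first three
// nameserver lines of the node resolv.conf are applied to the pod.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Assumed limit; the kubelet applies the same cap of three nameservers.
const maxNameservers = 3

func capNameservers(resolvConf string) (kept, dropped []string) {
	scanner := bufio.NewScanner(strings.NewReader(resolvConf))
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	// Hypothetical node resolv.conf with more resolvers than the cap allows.
	node := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
search example.internal`

	kept, dropped := capNameservers(node)
	fmt.Println("applied nameserver line:", strings.Join(kept, " "))
	fmt.Println("omitted:", strings.Join(dropped, " "))
}
```

Trimming the node's /etc/resolv.conf (or the file passed to the kubelet's --resolv-conf flag) to at most three nameservers makes the warning disappear.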
Oct 27 08:18:05.747806 containerd[1622]: time="2025-10-27T08:18:05.747686183Z" level=info msg="StartContainer for \"fed9dfc0faea85f7ce8a0932794fba829cbb78a6986195d58e55387285626d5b\" returns successfully" Oct 27 08:18:05.882021 kubelet[2803]: E1027 08:18:05.881562 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:05.949528 kubelet[2803]: E1027 08:18:05.949466 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:05.968080 kubelet[2803]: I1027 08:18:05.967845 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5765dbdb9b-kjfnd" podStartSLOduration=0.958219286 podStartE2EDuration="3.967827666s" podCreationTimestamp="2025-10-27 08:18:02 +0000 UTC" firstStartedPulling="2025-10-27 08:18:02.5974883 +0000 UTC m=+20.813349879" lastFinishedPulling="2025-10-27 08:18:05.607096679 +0000 UTC m=+23.822958259" observedRunningTime="2025-10-27 08:18:05.967341421 +0000 UTC m=+24.183203010" watchObservedRunningTime="2025-10-27 08:18:05.967827666 +0000 UTC m=+24.183689245" Oct 27 08:18:06.011088 kubelet[2803]: E1027 08:18:06.011026 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.011088 kubelet[2803]: W1027 08:18:06.011064 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.011088 kubelet[2803]: E1027 08:18:06.011095 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.011369 kubelet[2803]: E1027 08:18:06.011359 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.011369 kubelet[2803]: W1027 08:18:06.011369 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.011416 kubelet[2803]: E1027 08:18:06.011378 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.011593 kubelet[2803]: E1027 08:18:06.011560 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.011593 kubelet[2803]: W1027 08:18:06.011572 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.011593 kubelet[2803]: E1027 08:18:06.011581 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:06.012027 kubelet[2803]: E1027 08:18:06.011984 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.012027 kubelet[2803]: W1027 08:18:06.012016 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.012111 kubelet[2803]: E1027 08:18:06.012050 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.013079 kubelet[2803]: E1027 08:18:06.013051 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.013079 kubelet[2803]: W1027 08:18:06.013070 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.013079 kubelet[2803]: E1027 08:18:06.013082 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.013341 kubelet[2803]: E1027 08:18:06.013319 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.013341 kubelet[2803]: W1027 08:18:06.013333 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.013341 kubelet[2803]: E1027 08:18:06.013343 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.013686 kubelet[2803]: E1027 08:18:06.013658 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.013686 kubelet[2803]: W1027 08:18:06.013671 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.013686 kubelet[2803]: E1027 08:18:06.013683 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.014293 kubelet[2803]: E1027 08:18:06.014273 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.014293 kubelet[2803]: W1027 08:18:06.014288 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.014380 kubelet[2803]: E1027 08:18:06.014299 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:06.014547 kubelet[2803]: E1027 08:18:06.014531 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.014547 kubelet[2803]: W1027 08:18:06.014543 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.014606 kubelet[2803]: E1027 08:18:06.014553 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.014836 kubelet[2803]: E1027 08:18:06.014814 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.014871 kubelet[2803]: W1027 08:18:06.014834 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.014871 kubelet[2803]: E1027 08:18:06.014853 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.015118 kubelet[2803]: E1027 08:18:06.015100 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.015118 kubelet[2803]: W1027 08:18:06.015114 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.015175 kubelet[2803]: E1027 08:18:06.015126 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.015341 kubelet[2803]: E1027 08:18:06.015324 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.015341 kubelet[2803]: W1027 08:18:06.015336 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.015409 kubelet[2803]: E1027 08:18:06.015345 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.015580 kubelet[2803]: E1027 08:18:06.015561 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.015580 kubelet[2803]: W1027 08:18:06.015577 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.015639 kubelet[2803]: E1027 08:18:06.015589 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:06.015785 kubelet[2803]: E1027 08:18:06.015768 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.015785 kubelet[2803]: W1027 08:18:06.015779 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.015785 kubelet[2803]: E1027 08:18:06.015787 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.015985 kubelet[2803]: E1027 08:18:06.015968 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.016015 kubelet[2803]: W1027 08:18:06.015992 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.016015 kubelet[2803]: E1027 08:18:06.016003 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.037637 kubelet[2803]: E1027 08:18:06.037597 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.037637 kubelet[2803]: W1027 08:18:06.037633 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.037860 kubelet[2803]: E1027 08:18:06.037661 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.038159 kubelet[2803]: E1027 08:18:06.038092 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.038159 kubelet[2803]: W1027 08:18:06.038114 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.038159 kubelet[2803]: E1027 08:18:06.038138 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.038895 kubelet[2803]: E1027 08:18:06.038852 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.038977 kubelet[2803]: W1027 08:18:06.038892 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.038977 kubelet[2803]: E1027 08:18:06.038943 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:06.039342 kubelet[2803]: E1027 08:18:06.039319 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.039384 kubelet[2803]: W1027 08:18:06.039341 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.039384 kubelet[2803]: E1027 08:18:06.039360 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.039714 kubelet[2803]: E1027 08:18:06.039686 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.039714 kubelet[2803]: W1027 08:18:06.039706 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.039830 kubelet[2803]: E1027 08:18:06.039731 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.040065 kubelet[2803]: E1027 08:18:06.040037 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.040065 kubelet[2803]: W1027 08:18:06.040058 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.040138 kubelet[2803]: E1027 08:18:06.040078 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.040483 kubelet[2803]: E1027 08:18:06.040452 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.040483 kubelet[2803]: W1027 08:18:06.040473 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.040557 kubelet[2803]: E1027 08:18:06.040492 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.041005 kubelet[2803]: E1027 08:18:06.040981 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.041048 kubelet[2803]: W1027 08:18:06.041002 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.041048 kubelet[2803]: E1027 08:18:06.041023 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:06.041906 kubelet[2803]: E1027 08:18:06.041851 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.041906 kubelet[2803]: W1027 08:18:06.041875 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.041996 kubelet[2803]: E1027 08:18:06.041905 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.042259 kubelet[2803]: E1027 08:18:06.042236 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.042306 kubelet[2803]: W1027 08:18:06.042258 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.042306 kubelet[2803]: E1027 08:18:06.042279 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.042624 kubelet[2803]: E1027 08:18:06.042602 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.042677 kubelet[2803]: W1027 08:18:06.042624 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.042677 kubelet[2803]: E1027 08:18:06.042644 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.043346 kubelet[2803]: E1027 08:18:06.043314 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.043346 kubelet[2803]: W1027 08:18:06.043336 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.043482 kubelet[2803]: E1027 08:18:06.043355 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.043682 kubelet[2803]: E1027 08:18:06.043662 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.043731 kubelet[2803]: W1027 08:18:06.043712 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.043765 kubelet[2803]: E1027 08:18:06.043735 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:06.044087 kubelet[2803]: E1027 08:18:06.044068 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.044122 kubelet[2803]: W1027 08:18:06.044086 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.044122 kubelet[2803]: E1027 08:18:06.044105 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.044471 kubelet[2803]: E1027 08:18:06.044434 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.044471 kubelet[2803]: W1027 08:18:06.044456 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.044471 kubelet[2803]: E1027 08:18:06.044475 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.044812 kubelet[2803]: E1027 08:18:06.044748 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.044812 kubelet[2803]: W1027 08:18:06.044762 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.044812 kubelet[2803]: E1027 08:18:06.044785 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.045269 kubelet[2803]: E1027 08:18:06.045058 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.045269 kubelet[2803]: W1027 08:18:06.045073 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.045269 kubelet[2803]: E1027 08:18:06.045091 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:06.045785 kubelet[2803]: E1027 08:18:06.045761 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:06.045836 kubelet[2803]: W1027 08:18:06.045783 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:06.045836 kubelet[2803]: E1027 08:18:06.045803 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:06.950663 kubelet[2803]: I1027 08:18:06.950621 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:18:06.951140 kubelet[2803]: E1027 08:18:06.950978 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:07.021844 kubelet[2803]: E1027 08:18:07.021813 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.021844 kubelet[2803]: W1027 08:18:07.021835 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.022022 kubelet[2803]: E1027 08:18:07.021858 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.022057 kubelet[2803]: E1027 08:18:07.022044 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.022057 kubelet[2803]: W1027 08:18:07.022053 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.022113 kubelet[2803]: E1027 08:18:07.022063 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.022273 kubelet[2803]: E1027 08:18:07.022257 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.022273 kubelet[2803]: W1027 08:18:07.022268 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.022273 kubelet[2803]: E1027 08:18:07.022278 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.022463 kubelet[2803]: E1027 08:18:07.022448 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.022463 kubelet[2803]: W1027 08:18:07.022458 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.022529 kubelet[2803]: E1027 08:18:07.022468 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:07.022645 kubelet[2803]: E1027 08:18:07.022630 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.022645 kubelet[2803]: W1027 08:18:07.022640 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.022705 kubelet[2803]: E1027 08:18:07.022649 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.022821 kubelet[2803]: E1027 08:18:07.022805 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.022821 kubelet[2803]: W1027 08:18:07.022818 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.022895 kubelet[2803]: E1027 08:18:07.022829 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.023047 kubelet[2803]: E1027 08:18:07.023020 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.023047 kubelet[2803]: W1027 08:18:07.023033 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.023047 kubelet[2803]: E1027 08:18:07.023044 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.023260 kubelet[2803]: E1027 08:18:07.023243 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.023260 kubelet[2803]: W1027 08:18:07.023254 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.023336 kubelet[2803]: E1027 08:18:07.023264 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.023457 kubelet[2803]: E1027 08:18:07.023441 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.023457 kubelet[2803]: W1027 08:18:07.023452 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.023532 kubelet[2803]: E1027 08:18:07.023461 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:07.023642 kubelet[2803]: E1027 08:18:07.023626 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.023642 kubelet[2803]: W1027 08:18:07.023638 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.023701 kubelet[2803]: E1027 08:18:07.023647 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.023838 kubelet[2803]: E1027 08:18:07.023805 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.023838 kubelet[2803]: W1027 08:18:07.023815 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.023838 kubelet[2803]: E1027 08:18:07.023824 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.024023 kubelet[2803]: E1027 08:18:07.024003 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.024023 kubelet[2803]: W1027 08:18:07.024015 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.024084 kubelet[2803]: E1027 08:18:07.024025 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.024222 kubelet[2803]: E1027 08:18:07.024194 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.024222 kubelet[2803]: W1027 08:18:07.024204 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.024301 kubelet[2803]: E1027 08:18:07.024235 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.024420 kubelet[2803]: E1027 08:18:07.024404 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.024420 kubelet[2803]: W1027 08:18:07.024414 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.024482 kubelet[2803]: E1027 08:18:07.024424 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:07.024599 kubelet[2803]: E1027 08:18:07.024583 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.024599 kubelet[2803]: W1027 08:18:07.024593 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.024681 kubelet[2803]: E1027 08:18:07.024603 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.048170 kubelet[2803]: E1027 08:18:07.048110 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.048170 kubelet[2803]: W1027 08:18:07.048135 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.048170 kubelet[2803]: E1027 08:18:07.048157 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.048487 kubelet[2803]: E1027 08:18:07.048452 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.048487 kubelet[2803]: W1027 08:18:07.048469 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.048487 kubelet[2803]: E1027 08:18:07.048481 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.048774 kubelet[2803]: E1027 08:18:07.048744 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.048774 kubelet[2803]: W1027 08:18:07.048759 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.048774 kubelet[2803]: E1027 08:18:07.048774 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.049074 kubelet[2803]: E1027 08:18:07.049056 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.049074 kubelet[2803]: W1027 08:18:07.049069 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.049145 kubelet[2803]: E1027 08:18:07.049078 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:07.049326 kubelet[2803]: E1027 08:18:07.049308 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.049326 kubelet[2803]: W1027 08:18:07.049320 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.049396 kubelet[2803]: E1027 08:18:07.049330 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.049535 kubelet[2803]: E1027 08:18:07.049522 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.049535 kubelet[2803]: W1027 08:18:07.049530 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.049596 kubelet[2803]: E1027 08:18:07.049538 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.049786 kubelet[2803]: E1027 08:18:07.049768 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.049786 kubelet[2803]: W1027 08:18:07.049779 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.049860 kubelet[2803]: E1027 08:18:07.049788 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.050022 kubelet[2803]: E1027 08:18:07.050002 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.050022 kubelet[2803]: W1027 08:18:07.050015 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.050081 kubelet[2803]: E1027 08:18:07.050026 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.050288 kubelet[2803]: E1027 08:18:07.050270 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.050288 kubelet[2803]: W1027 08:18:07.050282 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.050364 kubelet[2803]: E1027 08:18:07.050292 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:07.050753 kubelet[2803]: E1027 08:18:07.050703 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.050753 kubelet[2803]: W1027 08:18:07.050737 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.050823 kubelet[2803]: E1027 08:18:07.050772 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.051078 kubelet[2803]: E1027 08:18:07.051045 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.051078 kubelet[2803]: W1027 08:18:07.051059 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.051078 kubelet[2803]: E1027 08:18:07.051074 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.051415 kubelet[2803]: E1027 08:18:07.051384 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.051415 kubelet[2803]: W1027 08:18:07.051400 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.051415 kubelet[2803]: E1027 08:18:07.051411 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.051663 kubelet[2803]: E1027 08:18:07.051635 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.051663 kubelet[2803]: W1027 08:18:07.051650 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.051663 kubelet[2803]: E1027 08:18:07.051660 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.051922 kubelet[2803]: E1027 08:18:07.051901 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.051922 kubelet[2803]: W1027 08:18:07.051917 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.051993 kubelet[2803]: E1027 08:18:07.051929 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:07.052157 kubelet[2803]: E1027 08:18:07.052138 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.052157 kubelet[2803]: W1027 08:18:07.052155 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.052237 kubelet[2803]: E1027 08:18:07.052165 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.052479 kubelet[2803]: E1027 08:18:07.052450 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.052479 kubelet[2803]: W1027 08:18:07.052466 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.052548 kubelet[2803]: E1027 08:18:07.052479 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.052694 kubelet[2803]: E1027 08:18:07.052675 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.052694 kubelet[2803]: W1027 08:18:07.052688 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.052755 kubelet[2803]: E1027 08:18:07.052698 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:18:07.053176 kubelet[2803]: E1027 08:18:07.053147 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:18:07.053176 kubelet[2803]: W1027 08:18:07.053160 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:18:07.053176 kubelet[2803]: E1027 08:18:07.053171 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:18:07.513456 containerd[1622]: time="2025-10-27T08:18:07.513389338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:07.514369 containerd[1622]: time="2025-10-27T08:18:07.514337321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 27 08:18:07.515726 containerd[1622]: time="2025-10-27T08:18:07.515696287Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:07.518101 containerd[1622]: time="2025-10-27T08:18:07.518068819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:07.518901 containerd[1622]: time="2025-10-27T08:18:07.518853725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.911211459s" Oct 27 08:18:07.518937 containerd[1622]: time="2025-10-27T08:18:07.518910371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 27 08:18:07.524907 containerd[1622]: time="2025-10-27T08:18:07.524828081Z" level=info msg="CreateContainer within sandbox \"f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 27 08:18:07.534929 containerd[1622]: time="2025-10-27T08:18:07.534857342Z" level=info msg="Container 747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:18:07.542625 containerd[1622]: time="2025-10-27T08:18:07.542571190Z" level=info msg="CreateContainer within sandbox \"f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b\"" Oct 27 08:18:07.543260 containerd[1622]: time="2025-10-27T08:18:07.543194852Z" level=info msg="StartContainer for \"747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b\"" Oct 27 08:18:07.544966 containerd[1622]: time="2025-10-27T08:18:07.544935206Z" level=info msg="connecting to shim 747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b" address="unix:///run/containerd/s/0e551dfda2301b841ed958866c8880bb395400a0551659a528d9826cd9a36d74" protocol=ttrpc version=3 Oct 27 08:18:07.572415 systemd[1]: Started cri-containerd-747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b.scope - libcontainer container 747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b. 
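The repeated kubelet errors above are the FlexVolume probe loop: the kubelet executes the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to unmarshal its stdout as JSON, and because the binary is not installed yet (the flexvol-driver container being pulled and created here is what puts it in place) the output is empty and the unmarshal fails. A minimal sketch, in Go, of the init handshake such a driver is expected to answer; this is illustrative only, not the Calico driver, and the capability field is an assumption:

    // flexvol_init_sketch.go -- hypothetical minimal FlexVolume-style driver.
    // It answers the "init" call with a JSON status object on stdout, which is
    // the output the kubelet driver-call code above tries to unmarshal.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false}, // assumption: driver needs no attach/detach
            })
            fmt.Println(string(out))
            return
        }
        // mount/unmount and the other verbs are out of scope for this sketch.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }

Once a binary like this exists at the probed path, init returns a well-formed status object and the "unexpected end of JSON input" probe errors stop.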
Oct 27 08:18:07.619440 containerd[1622]: time="2025-10-27T08:18:07.619396802Z" level=info msg="StartContainer for \"747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b\" returns successfully" Oct 27 08:18:07.631481 systemd[1]: cri-containerd-747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b.scope: Deactivated successfully. Oct 27 08:18:07.633057 containerd[1622]: time="2025-10-27T08:18:07.633008221Z" level=info msg="received exit event container_id:\"747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b\" id:\"747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b\" pid:3544 exited_at:{seconds:1761553087 nanos:632609601}" Oct 27 08:18:07.633129 containerd[1622]: time="2025-10-27T08:18:07.633098431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b\" id:\"747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b\" pid:3544 exited_at:{seconds:1761553087 nanos:632609601}" Oct 27 08:18:07.661517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-747178bef8a3abf26edbec93ad97c5d95cf3e6cac10ad1520785f3be29aea74b-rootfs.mount: Deactivated successfully. Oct 27 08:18:07.881763 kubelet[2803]: E1027 08:18:07.881583 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:07.954186 kubelet[2803]: E1027 08:18:07.954151 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:08.958945 kubelet[2803]: E1027 08:18:08.958870 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:08.960386 containerd[1622]: time="2025-10-27T08:18:08.960312852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 08:18:09.881266 kubelet[2803]: E1027 08:18:09.881145 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:11.881623 kubelet[2803]: E1027 08:18:11.881552 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:13.435582 containerd[1622]: time="2025-10-27T08:18:13.435518522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:13.436410 containerd[1622]: time="2025-10-27T08:18:13.436375872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 27 08:18:13.437501 containerd[1622]: time="2025-10-27T08:18:13.437463346Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:13.440004 containerd[1622]: time="2025-10-27T08:18:13.439972721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:13.440753 containerd[1622]: time="2025-10-27T08:18:13.440706831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.480355126s" Oct 27 08:18:13.440753 containerd[1622]: time="2025-10-27T08:18:13.440743139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 27 08:18:13.445870 containerd[1622]: time="2025-10-27T08:18:13.445824286Z" level=info msg="CreateContainer within sandbox \"f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 08:18:13.454709 containerd[1622]: time="2025-10-27T08:18:13.454668874Z" level=info msg="Container 192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:18:13.464824 containerd[1622]: time="2025-10-27T08:18:13.464778220Z" level=info msg="CreateContainer within sandbox \"f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf\"" Oct 27 08:18:13.465430 containerd[1622]: time="2025-10-27T08:18:13.465395059Z" level=info msg="StartContainer for \"192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf\"" Oct 27 08:18:13.467125 containerd[1622]: time="2025-10-27T08:18:13.467095373Z" level=info msg="connecting to shim 192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf" address="unix:///run/containerd/s/0e551dfda2301b841ed958866c8880bb395400a0551659a528d9826cd9a36d74" protocol=ttrpc version=3 Oct 27 08:18:13.491371 systemd[1]: Started cri-containerd-192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf.scope - libcontainer container 192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf. 
Oct 27 08:18:13.542246 containerd[1622]: time="2025-10-27T08:18:13.539975412Z" level=info msg="StartContainer for \"192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf\" returns successfully" Oct 27 08:18:13.881373 kubelet[2803]: E1027 08:18:13.881308 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:13.975698 kubelet[2803]: E1027 08:18:13.975635 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:14.602989 systemd[1]: cri-containerd-192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf.scope: Deactivated successfully. Oct 27 08:18:14.603561 systemd[1]: cri-containerd-192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf.scope: Consumed 653ms CPU time, 177.4M memory peak, 3.5M read from disk, 171.3M written to disk. Oct 27 08:18:14.605412 containerd[1622]: time="2025-10-27T08:18:14.605363926Z" level=info msg="received exit event container_id:\"192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf\" id:\"192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf\" pid:3607 exited_at:{seconds:1761553094 nanos:604929779}" Oct 27 08:18:14.605952 containerd[1622]: time="2025-10-27T08:18:14.605413679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf\" id:\"192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf\" pid:3607 exited_at:{seconds:1761553094 nanos:604929779}" Oct 27 08:18:14.628664 kubelet[2803]: I1027 08:18:14.628338 2803 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 27 08:18:14.634956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-192e82685bbea6c118a406c5a69935bcf372e3b5f4415079386e2fa7c8162cbf-rootfs.mount: Deactivated successfully. Oct 27 08:18:14.977504 kubelet[2803]: E1027 08:18:14.977371 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:15.007521 systemd[1]: Created slice kubepods-besteffort-podcd97c86b_d16c_499b_928c_ee1afbc3c575.slice - libcontainer container kubepods-besteffort-podcd97c86b_d16c_499b_928c_ee1afbc3c575.slice. Oct 27 08:18:15.015905 systemd[1]: Created slice kubepods-besteffort-podbce4a7cd_a2cf_439e_a5d4_f335be73a306.slice - libcontainer container kubepods-besteffort-podbce4a7cd_a2cf_439e_a5d4_f335be73a306.slice. Oct 27 08:18:15.022497 systemd[1]: Created slice kubepods-burstable-pod52472afe_0ee8_40cc_8073_d9351a66e2e8.slice - libcontainer container kubepods-burstable-pod52472afe_0ee8_40cc_8073_d9351a66e2e8.slice. Oct 27 08:18:15.028923 systemd[1]: Created slice kubepods-besteffort-pod9eca1f3c_6640_4b80_93fd_4cf14826a563.slice - libcontainer container kubepods-besteffort-pod9eca1f3c_6640_4b80_93fd_4cf14826a563.slice. Oct 27 08:18:15.034711 systemd[1]: Created slice kubepods-burstable-podefeba680_c11b_470e_be50_8994147e2b12.slice - libcontainer container kubepods-burstable-podefeba680_c11b_470e_be50_8994147e2b12.slice. 
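The TaskExit events above carry the exit time as a pair of Unix seconds and nanoseconds (exited_at:{seconds:1761553094 nanos:604929779}) rather than a formatted timestamp. A small sketch converting that pair back to the wall-clock form used elsewhere in this log, using the values from the event above:

    // exit_time_sketch.go -- convert a containerd TaskExit {seconds, nanos} pair to UTC.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        exitedAt := time.Unix(1761553094, 604929779).UTC() // values from the exit event above
        fmt.Println(exitedAt.Format(time.RFC3339Nano))      // prints 2025-10-27T08:18:14.604929779Z
    }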
Oct 27 08:18:15.039300 systemd[1]: Created slice kubepods-besteffort-podeb96c43d_718e_4752_922d_cb8f671d414c.slice - libcontainer container kubepods-besteffort-podeb96c43d_718e_4752_922d_cb8f671d414c.slice. Oct 27 08:18:15.046405 systemd[1]: Created slice kubepods-besteffort-pod77100d96_e703_4e0a_b71a_6946f424cbfa.slice - libcontainer container kubepods-besteffort-pod77100d96_e703_4e0a_b71a_6946f424cbfa.slice. Oct 27 08:18:15.106098 kubelet[2803]: I1027 08:18:15.106032 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52472afe-0ee8-40cc-8073-d9351a66e2e8-config-volume\") pod \"coredns-66bc5c9577-x2z4l\" (UID: \"52472afe-0ee8-40cc-8073-d9351a66e2e8\") " pod="kube-system/coredns-66bc5c9577-x2z4l" Oct 27 08:18:15.106098 kubelet[2803]: I1027 08:18:15.106087 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bce4a7cd-a2cf-439e-a5d4-f335be73a306-calico-apiserver-certs\") pod \"calico-apiserver-66b74c9c6f-rnnck\" (UID: \"bce4a7cd-a2cf-439e-a5d4-f335be73a306\") " pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" Oct 27 08:18:15.106098 kubelet[2803]: I1027 08:18:15.106111 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79pgq\" (UniqueName: \"kubernetes.io/projected/77100d96-e703-4e0a-b71a-6946f424cbfa-kube-api-access-79pgq\") pod \"calico-apiserver-66b74c9c6f-jk2nn\" (UID: \"77100d96-e703-4e0a-b71a-6946f424cbfa\") " pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" Oct 27 08:18:15.106341 kubelet[2803]: I1027 08:18:15.106126 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdm9\" (UniqueName: \"kubernetes.io/projected/cd97c86b-d16c-499b-928c-ee1afbc3c575-kube-api-access-stdm9\") pod \"goldmane-7c778bb748-lc29c\" (UID: \"cd97c86b-d16c-499b-928c-ee1afbc3c575\") " pod="calico-system/goldmane-7c778bb748-lc29c" Oct 27 08:18:15.106341 kubelet[2803]: I1027 08:18:15.106159 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wsc\" (UniqueName: \"kubernetes.io/projected/bce4a7cd-a2cf-439e-a5d4-f335be73a306-kube-api-access-z2wsc\") pod \"calico-apiserver-66b74c9c6f-rnnck\" (UID: \"bce4a7cd-a2cf-439e-a5d4-f335be73a306\") " pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" Oct 27 08:18:15.106341 kubelet[2803]: I1027 08:18:15.106196 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77100d96-e703-4e0a-b71a-6946f424cbfa-calico-apiserver-certs\") pod \"calico-apiserver-66b74c9c6f-jk2nn\" (UID: \"77100d96-e703-4e0a-b71a-6946f424cbfa\") " pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" Oct 27 08:18:15.106341 kubelet[2803]: I1027 08:18:15.106290 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb96c43d-718e-4752-922d-cb8f671d414c-tigera-ca-bundle\") pod \"calico-kube-controllers-8cb769c6-rbvs2\" (UID: \"eb96c43d-718e-4752-922d-cb8f671d414c\") " pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" Oct 27 08:18:15.106341 kubelet[2803]: I1027 08:18:15.106307 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eca1f3c-6640-4b80-93fd-4cf14826a563-whisker-ca-bundle\") pod \"whisker-6d8b75798c-qvwrh\" (UID: \"9eca1f3c-6640-4b80-93fd-4cf14826a563\") " pod="calico-system/whisker-6d8b75798c-qvwrh" Oct 27 08:18:15.106479 kubelet[2803]: I1027 08:18:15.106323 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd97c86b-d16c-499b-928c-ee1afbc3c575-config\") pod \"goldmane-7c778bb748-lc29c\" (UID: \"cd97c86b-d16c-499b-928c-ee1afbc3c575\") " pod="calico-system/goldmane-7c778bb748-lc29c" Oct 27 08:18:15.106479 kubelet[2803]: I1027 08:18:15.106337 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd97c86b-d16c-499b-928c-ee1afbc3c575-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-lc29c\" (UID: \"cd97c86b-d16c-499b-928c-ee1afbc3c575\") " pod="calico-system/goldmane-7c778bb748-lc29c" Oct 27 08:18:15.106479 kubelet[2803]: I1027 08:18:15.106356 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgm5c\" (UniqueName: \"kubernetes.io/projected/52472afe-0ee8-40cc-8073-d9351a66e2e8-kube-api-access-fgm5c\") pod \"coredns-66bc5c9577-x2z4l\" (UID: \"52472afe-0ee8-40cc-8073-d9351a66e2e8\") " pod="kube-system/coredns-66bc5c9577-x2z4l" Oct 27 08:18:15.106479 kubelet[2803]: I1027 08:18:15.106371 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/efeba680-c11b-470e-be50-8994147e2b12-config-volume\") pod \"coredns-66bc5c9577-ls4jw\" (UID: \"efeba680-c11b-470e-be50-8994147e2b12\") " pod="kube-system/coredns-66bc5c9577-ls4jw" Oct 27 08:18:15.106479 kubelet[2803]: I1027 08:18:15.106413 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eca1f3c-6640-4b80-93fd-4cf14826a563-whisker-backend-key-pair\") pod \"whisker-6d8b75798c-qvwrh\" (UID: \"9eca1f3c-6640-4b80-93fd-4cf14826a563\") " pod="calico-system/whisker-6d8b75798c-qvwrh" Oct 27 08:18:15.106600 kubelet[2803]: I1027 08:18:15.106435 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cd97c86b-d16c-499b-928c-ee1afbc3c575-goldmane-key-pair\") pod \"goldmane-7c778bb748-lc29c\" (UID: \"cd97c86b-d16c-499b-928c-ee1afbc3c575\") " pod="calico-system/goldmane-7c778bb748-lc29c" Oct 27 08:18:15.106600 kubelet[2803]: I1027 08:18:15.106462 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpnj\" (UniqueName: \"kubernetes.io/projected/eb96c43d-718e-4752-922d-cb8f671d414c-kube-api-access-xfpnj\") pod \"calico-kube-controllers-8cb769c6-rbvs2\" (UID: \"eb96c43d-718e-4752-922d-cb8f671d414c\") " pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" Oct 27 08:18:15.106600 kubelet[2803]: I1027 08:18:15.106481 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26jg\" (UniqueName: \"kubernetes.io/projected/efeba680-c11b-470e-be50-8994147e2b12-kube-api-access-n26jg\") pod \"coredns-66bc5c9577-ls4jw\" (UID: \"efeba680-c11b-470e-be50-8994147e2b12\") " pod="kube-system/coredns-66bc5c9577-ls4jw" Oct 27 
08:18:15.106600 kubelet[2803]: I1027 08:18:15.106504 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt25v\" (UniqueName: \"kubernetes.io/projected/9eca1f3c-6640-4b80-93fd-4cf14826a563-kube-api-access-mt25v\") pod \"whisker-6d8b75798c-qvwrh\" (UID: \"9eca1f3c-6640-4b80-93fd-4cf14826a563\") " pod="calico-system/whisker-6d8b75798c-qvwrh" Oct 27 08:18:15.316465 containerd[1622]: time="2025-10-27T08:18:15.316413620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lc29c,Uid:cd97c86b-d16c-499b-928c-ee1afbc3c575,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:15.322240 containerd[1622]: time="2025-10-27T08:18:15.322189159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-rnnck,Uid:bce4a7cd-a2cf-439e-a5d4-f335be73a306,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:18:15.330771 kubelet[2803]: E1027 08:18:15.330731 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:15.331551 containerd[1622]: time="2025-10-27T08:18:15.331487116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x2z4l,Uid:52472afe-0ee8-40cc-8073-d9351a66e2e8,Namespace:kube-system,Attempt:0,}" Oct 27 08:18:15.335514 containerd[1622]: time="2025-10-27T08:18:15.335458346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d8b75798c-qvwrh,Uid:9eca1f3c-6640-4b80-93fd-4cf14826a563,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:15.340519 kubelet[2803]: E1027 08:18:15.340151 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:15.340880 containerd[1622]: time="2025-10-27T08:18:15.340849583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ls4jw,Uid:efeba680-c11b-470e-be50-8994147e2b12,Namespace:kube-system,Attempt:0,}" Oct 27 08:18:15.346084 containerd[1622]: time="2025-10-27T08:18:15.346047789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8cb769c6-rbvs2,Uid:eb96c43d-718e-4752-922d-cb8f671d414c,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:15.355615 containerd[1622]: time="2025-10-27T08:18:15.355572852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-jk2nn,Uid:77100d96-e703-4e0a-b71a-6946f424cbfa,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:18:15.466201 containerd[1622]: time="2025-10-27T08:18:15.466120647Z" level=error msg="Failed to destroy network for sandbox \"ca712427aa9a57876773cc30462ce0d233e9eb21bc965d4db457a052da088f8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.469323 containerd[1622]: time="2025-10-27T08:18:15.469077122Z" level=error msg="Failed to destroy network for sandbox \"0a61dd142baf783bd0877a91a0fa31ba39a4be5aec15fe4448331dccc95540c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.478032 containerd[1622]: time="2025-10-27T08:18:15.477190332Z" level=error msg="Failed to destroy network for sandbox 
\"fde5886248ff82fbace1b50387e48f80c5ca44225281eda75fb9efc20c89b755\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.478474 containerd[1622]: time="2025-10-27T08:18:15.478442304Z" level=error msg="Failed to destroy network for sandbox \"27561c9a22475973761cdac51ac667efba5b8b18f357783afeb9f87f5f1412f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.478742 containerd[1622]: time="2025-10-27T08:18:15.478721088Z" level=error msg="Failed to destroy network for sandbox \"91e6bbb2b3249d03d732938799e69aeaae9e4d260d97f80661cb059e419f2f58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.502493 containerd[1622]: time="2025-10-27T08:18:15.502450685Z" level=error msg="Failed to destroy network for sandbox \"3aedd124d743e65613b7a843ae5b9cab481179b89d69dd1ea9c629e54c0e6a82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.502761 containerd[1622]: time="2025-10-27T08:18:15.502720952Z" level=error msg="Failed to destroy network for sandbox \"4bc7ae90772721a4e4774e735973512c37b4d30ad56607da261fa20ead9aa40d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.517968 containerd[1622]: time="2025-10-27T08:18:15.511783106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-rnnck,Uid:bce4a7cd-a2cf-439e-a5d4-f335be73a306,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca712427aa9a57876773cc30462ce0d233e9eb21bc965d4db457a052da088f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.518066 containerd[1622]: time="2025-10-27T08:18:15.511825615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d8b75798c-qvwrh,Uid:9eca1f3c-6640-4b80-93fd-4cf14826a563,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a61dd142baf783bd0877a91a0fa31ba39a4be5aec15fe4448331dccc95540c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.518117 containerd[1622]: time="2025-10-27T08:18:15.511832318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lc29c,Uid:cd97c86b-d16c-499b-928c-ee1afbc3c575,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e6bbb2b3249d03d732938799e69aeaae9e4d260d97f80661cb059e419f2f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 27 08:18:15.518117 containerd[1622]: time="2025-10-27T08:18:15.511843419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-jk2nn,Uid:77100d96-e703-4e0a-b71a-6946f424cbfa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27561c9a22475973761cdac51ac667efba5b8b18f357783afeb9f87f5f1412f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.518185 containerd[1622]: time="2025-10-27T08:18:15.511844531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ls4jw,Uid:efeba680-c11b-470e-be50-8994147e2b12,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde5886248ff82fbace1b50387e48f80c5ca44225281eda75fb9efc20c89b755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.518185 containerd[1622]: time="2025-10-27T08:18:15.512945149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x2z4l,Uid:52472afe-0ee8-40cc-8073-d9351a66e2e8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aedd124d743e65613b7a843ae5b9cab481179b89d69dd1ea9c629e54c0e6a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.518185 containerd[1622]: time="2025-10-27T08:18:15.513856050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8cb769c6-rbvs2,Uid:eb96c43d-718e-4752-922d-cb8f671d414c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc7ae90772721a4e4774e735973512c37b4d30ad56607da261fa20ead9aa40d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.527876 kubelet[2803]: E1027 08:18:15.527786 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e6bbb2b3249d03d732938799e69aeaae9e4d260d97f80661cb059e419f2f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.527876 kubelet[2803]: E1027 08:18:15.527781 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde5886248ff82fbace1b50387e48f80c5ca44225281eda75fb9efc20c89b755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.527959 kubelet[2803]: E1027 08:18:15.527888 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27561c9a22475973761cdac51ac667efba5b8b18f357783afeb9f87f5f1412f0\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.527959 kubelet[2803]: E1027 08:18:15.527795 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aedd124d743e65613b7a843ae5b9cab481179b89d69dd1ea9c629e54c0e6a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.527959 kubelet[2803]: E1027 08:18:15.527920 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde5886248ff82fbace1b50387e48f80c5ca44225281eda75fb9efc20c89b755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ls4jw" Oct 27 08:18:15.527959 kubelet[2803]: E1027 08:18:15.527935 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc7ae90772721a4e4774e735973512c37b4d30ad56607da261fa20ead9aa40d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.528063 kubelet[2803]: E1027 08:18:15.527945 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde5886248ff82fbace1b50387e48f80c5ca44225281eda75fb9efc20c89b755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ls4jw" Oct 27 08:18:15.528063 kubelet[2803]: E1027 08:18:15.527950 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc7ae90772721a4e4774e735973512c37b4d30ad56607da261fa20ead9aa40d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" Oct 27 08:18:15.528063 kubelet[2803]: E1027 08:18:15.527963 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc7ae90772721a4e4774e735973512c37b4d30ad56607da261fa20ead9aa40d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" Oct 27 08:18:15.528063 kubelet[2803]: E1027 08:18:15.527968 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aedd124d743e65613b7a843ae5b9cab481179b89d69dd1ea9c629e54c0e6a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x2z4l" Oct 27 08:18:15.528243 kubelet[2803]: E1027 
08:18:15.527993 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aedd124d743e65613b7a843ae5b9cab481179b89d69dd1ea9c629e54c0e6a82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x2z4l" Oct 27 08:18:15.528243 kubelet[2803]: E1027 08:18:15.528003 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ls4jw_kube-system(efeba680-c11b-470e-be50-8994147e2b12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ls4jw_kube-system(efeba680-c11b-470e-be50-8994147e2b12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fde5886248ff82fbace1b50387e48f80c5ca44225281eda75fb9efc20c89b755\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ls4jw" podUID="efeba680-c11b-470e-be50-8994147e2b12" Oct 27 08:18:15.528243 kubelet[2803]: E1027 08:18:15.528003 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8cb769c6-rbvs2_calico-system(eb96c43d-718e-4752-922d-cb8f671d414c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8cb769c6-rbvs2_calico-system(eb96c43d-718e-4752-922d-cb8f671d414c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bc7ae90772721a4e4774e735973512c37b4d30ad56607da261fa20ead9aa40d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" podUID="eb96c43d-718e-4752-922d-cb8f671d414c" Oct 27 08:18:15.528359 kubelet[2803]: E1027 08:18:15.527913 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27561c9a22475973761cdac51ac667efba5b8b18f357783afeb9f87f5f1412f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" Oct 27 08:18:15.528359 kubelet[2803]: E1027 08:18:15.528041 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27561c9a22475973761cdac51ac667efba5b8b18f357783afeb9f87f5f1412f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" Oct 27 08:18:15.528359 kubelet[2803]: E1027 08:18:15.528028 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a61dd142baf783bd0877a91a0fa31ba39a4be5aec15fe4448331dccc95540c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 
08:18:15.528433 kubelet[2803]: E1027 08:18:15.528058 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-x2z4l_kube-system(52472afe-0ee8-40cc-8073-d9351a66e2e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-x2z4l_kube-system(52472afe-0ee8-40cc-8073-d9351a66e2e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3aedd124d743e65613b7a843ae5b9cab481179b89d69dd1ea9c629e54c0e6a82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x2z4l" podUID="52472afe-0ee8-40cc-8073-d9351a66e2e8" Oct 27 08:18:15.528433 kubelet[2803]: E1027 08:18:15.528088 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a61dd142baf783bd0877a91a0fa31ba39a4be5aec15fe4448331dccc95540c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d8b75798c-qvwrh" Oct 27 08:18:15.528433 kubelet[2803]: E1027 08:18:15.528122 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a61dd142baf783bd0877a91a0fa31ba39a4be5aec15fe4448331dccc95540c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d8b75798c-qvwrh" Oct 27 08:18:15.528525 kubelet[2803]: E1027 08:18:15.528083 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b74c9c6f-jk2nn_calico-apiserver(77100d96-e703-4e0a-b71a-6946f424cbfa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b74c9c6f-jk2nn_calico-apiserver(77100d96-e703-4e0a-b71a-6946f424cbfa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27561c9a22475973761cdac51ac667efba5b8b18f357783afeb9f87f5f1412f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" podUID="77100d96-e703-4e0a-b71a-6946f424cbfa" Oct 27 08:18:15.528525 kubelet[2803]: E1027 08:18:15.528165 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d8b75798c-qvwrh_calico-system(9eca1f3c-6640-4b80-93fd-4cf14826a563)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d8b75798c-qvwrh_calico-system(9eca1f3c-6640-4b80-93fd-4cf14826a563)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a61dd142baf783bd0877a91a0fa31ba39a4be5aec15fe4448331dccc95540c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d8b75798c-qvwrh" podUID="9eca1f3c-6640-4b80-93fd-4cf14826a563" Oct 27 08:18:15.528525 kubelet[2803]: E1027 08:18:15.527906 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"91e6bbb2b3249d03d732938799e69aeaae9e4d260d97f80661cb059e419f2f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lc29c" Oct 27 08:18:15.528625 kubelet[2803]: E1027 08:18:15.528199 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e6bbb2b3249d03d732938799e69aeaae9e4d260d97f80661cb059e419f2f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lc29c" Oct 27 08:18:15.528625 kubelet[2803]: E1027 08:18:15.527783 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca712427aa9a57876773cc30462ce0d233e9eb21bc965d4db457a052da088f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.528625 kubelet[2803]: E1027 08:18:15.528285 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca712427aa9a57876773cc30462ce0d233e9eb21bc965d4db457a052da088f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" Oct 27 08:18:15.528704 kubelet[2803]: E1027 08:18:15.528287 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-lc29c_calico-system(cd97c86b-d16c-499b-928c-ee1afbc3c575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-lc29c_calico-system(cd97c86b-d16c-499b-928c-ee1afbc3c575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91e6bbb2b3249d03d732938799e69aeaae9e4d260d97f80661cb059e419f2f58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-lc29c" podUID="cd97c86b-d16c-499b-928c-ee1afbc3c575" Oct 27 08:18:15.528704 kubelet[2803]: E1027 08:18:15.528300 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca712427aa9a57876773cc30462ce0d233e9eb21bc965d4db457a052da088f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" Oct 27 08:18:15.528704 kubelet[2803]: E1027 08:18:15.528348 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b74c9c6f-rnnck_calico-apiserver(bce4a7cd-a2cf-439e-a5d4-f335be73a306)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b74c9c6f-rnnck_calico-apiserver(bce4a7cd-a2cf-439e-a5d4-f335be73a306)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"ca712427aa9a57876773cc30462ce0d233e9eb21bc965d4db457a052da088f8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" podUID="bce4a7cd-a2cf-439e-a5d4-f335be73a306" Oct 27 08:18:15.888145 systemd[1]: Created slice kubepods-besteffort-pod2e0d791c_1eec_4e51_af5e_ee7c86a5bb94.slice - libcontainer container kubepods-besteffort-pod2e0d791c_1eec_4e51_af5e_ee7c86a5bb94.slice. Oct 27 08:18:15.893957 containerd[1622]: time="2025-10-27T08:18:15.893896474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6bqk,Uid:2e0d791c-1eec-4e51-af5e-ee7c86a5bb94,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:15.949316 containerd[1622]: time="2025-10-27T08:18:15.949243605Z" level=error msg="Failed to destroy network for sandbox \"138e041102ef095d4a0b5c608c4e0fa092d7fd567e40bd3e49bb6a2279739cfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.952027 systemd[1]: run-netns-cni\x2dc4e9dad4\x2d9e34\x2d11da\x2db5c8\x2dbfca89fa13cc.mount: Deactivated successfully. Oct 27 08:18:15.953315 containerd[1622]: time="2025-10-27T08:18:15.953265510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6bqk,Uid:2e0d791c-1eec-4e51-af5e-ee7c86a5bb94,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"138e041102ef095d4a0b5c608c4e0fa092d7fd567e40bd3e49bb6a2279739cfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.953586 kubelet[2803]: E1027 08:18:15.953534 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"138e041102ef095d4a0b5c608c4e0fa092d7fd567e40bd3e49bb6a2279739cfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:15.953652 kubelet[2803]: E1027 08:18:15.953600 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"138e041102ef095d4a0b5c608c4e0fa092d7fd567e40bd3e49bb6a2279739cfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6bqk" Oct 27 08:18:15.953652 kubelet[2803]: E1027 08:18:15.953621 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"138e041102ef095d4a0b5c608c4e0fa092d7fd567e40bd3e49bb6a2279739cfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6bqk" Oct 27 08:18:15.953734 kubelet[2803]: E1027 08:18:15.953703 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-h6bqk_calico-system(2e0d791c-1eec-4e51-af5e-ee7c86a5bb94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6bqk_calico-system(2e0d791c-1eec-4e51-af5e-ee7c86a5bb94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"138e041102ef095d4a0b5c608c4e0fa092d7fd567e40bd3e49bb6a2279739cfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:15.982859 kubelet[2803]: E1027 08:18:15.982798 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:15.983674 containerd[1622]: time="2025-10-27T08:18:15.983629690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 27 08:18:22.073127 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:36688.service - OpenSSH per-connection server daemon (10.0.0.1:36688). Oct 27 08:18:22.150759 sshd[3908]: Accepted publickey for core from 10.0.0.1 port 36688 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:22.152700 sshd-session[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:22.158634 systemd-logind[1605]: New session 10 of user core. Oct 27 08:18:22.167370 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 08:18:22.506138 sshd[3911]: Connection closed by 10.0.0.1 port 36688 Oct 27 08:18:22.507272 sshd-session[3908]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:22.514903 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:36688.service: Deactivated successfully. Oct 27 08:18:22.520122 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 08:18:22.521864 systemd-logind[1605]: Session 10 logged out. Waiting for processes to exit. Oct 27 08:18:22.523914 systemd-logind[1605]: Removed session 10. Oct 27 08:18:23.868612 kubelet[2803]: I1027 08:18:23.868542 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:18:23.870068 kubelet[2803]: E1027 08:18:23.870046 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:24.000021 kubelet[2803]: E1027 08:18:23.999971 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:25.663298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276749031.mount: Deactivated successfully. 
Oct 27 08:18:26.644500 containerd[1622]: time="2025-10-27T08:18:26.644408633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:26.645471 containerd[1622]: time="2025-10-27T08:18:26.645435299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 27 08:18:26.648509 containerd[1622]: time="2025-10-27T08:18:26.648439739Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:26.650557 containerd[1622]: time="2025-10-27T08:18:26.650514423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:18:26.651122 containerd[1622]: time="2025-10-27T08:18:26.651087949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.667412033s" Oct 27 08:18:26.651183 containerd[1622]: time="2025-10-27T08:18:26.651126812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 27 08:18:26.670230 containerd[1622]: time="2025-10-27T08:18:26.670157161Z" level=info msg="CreateContainer within sandbox \"f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 27 08:18:26.681864 containerd[1622]: time="2025-10-27T08:18:26.681806586Z" level=info msg="Container 68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:18:26.709815 containerd[1622]: time="2025-10-27T08:18:26.709741246Z" level=info msg="CreateContainer within sandbox \"f778507d78f7dd6ff870b68cb091cb3aa2097923c56cb8c9591e5ea9bb79b7a2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa\"" Oct 27 08:18:26.726572 containerd[1622]: time="2025-10-27T08:18:26.726511502Z" level=info msg="StartContainer for \"68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa\"" Oct 27 08:18:26.729479 containerd[1622]: time="2025-10-27T08:18:26.729446050Z" level=info msg="connecting to shim 68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa" address="unix:///run/containerd/s/0e551dfda2301b841ed958866c8880bb395400a0551659a528d9826cd9a36d74" protocol=ttrpc version=3 Oct 27 08:18:26.759477 systemd[1]: Started cri-containerd-68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa.scope - libcontainer container 68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa. Oct 27 08:18:26.951466 containerd[1622]: time="2025-10-27T08:18:26.951345716Z" level=info msg="StartContainer for \"68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa\" returns successfully" Oct 27 08:18:26.972735 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 27 08:18:26.973604 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 27 08:18:26.975128 containerd[1622]: time="2025-10-27T08:18:26.975089060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-jk2nn,Uid:77100d96-e703-4e0a-b71a-6946f424cbfa,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:18:26.977307 containerd[1622]: time="2025-10-27T08:18:26.977279181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lc29c,Uid:cd97c86b-d16c-499b-928c-ee1afbc3c575,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:26.978781 kubelet[2803]: E1027 08:18:26.978746 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:26.979693 containerd[1622]: time="2025-10-27T08:18:26.979663837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ls4jw,Uid:efeba680-c11b-470e-be50-8994147e2b12,Namespace:kube-system,Attempt:0,}" Oct 27 08:18:27.044444 kubelet[2803]: E1027 08:18:27.043990 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:27.076012 kubelet[2803]: I1027 08:18:27.074845 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-47zlw" podStartSLOduration=1.138956345 podStartE2EDuration="25.074828736s" podCreationTimestamp="2025-10-27 08:18:02 +0000 UTC" firstStartedPulling="2025-10-27 08:18:02.716081323 +0000 UTC m=+20.931942902" lastFinishedPulling="2025-10-27 08:18:26.651953714 +0000 UTC m=+44.867815293" observedRunningTime="2025-10-27 08:18:27.073774978 +0000 UTC m=+45.289636577" watchObservedRunningTime="2025-10-27 08:18:27.074828736 +0000 UTC m=+45.290690315" Oct 27 08:18:27.189018 kubelet[2803]: I1027 08:18:27.185942 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt25v\" (UniqueName: \"kubernetes.io/projected/9eca1f3c-6640-4b80-93fd-4cf14826a563-kube-api-access-mt25v\") pod \"9eca1f3c-6640-4b80-93fd-4cf14826a563\" (UID: \"9eca1f3c-6640-4b80-93fd-4cf14826a563\") " Oct 27 08:18:27.189018 kubelet[2803]: I1027 08:18:27.186016 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eca1f3c-6640-4b80-93fd-4cf14826a563-whisker-ca-bundle\") pod \"9eca1f3c-6640-4b80-93fd-4cf14826a563\" (UID: \"9eca1f3c-6640-4b80-93fd-4cf14826a563\") " Oct 27 08:18:27.189018 kubelet[2803]: I1027 08:18:27.186039 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eca1f3c-6640-4b80-93fd-4cf14826a563-whisker-backend-key-pair\") pod \"9eca1f3c-6640-4b80-93fd-4cf14826a563\" (UID: \"9eca1f3c-6640-4b80-93fd-4cf14826a563\") " Oct 27 08:18:27.200237 kubelet[2803]: I1027 08:18:27.199509 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eca1f3c-6640-4b80-93fd-4cf14826a563-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9eca1f3c-6640-4b80-93fd-4cf14826a563" (UID: "9eca1f3c-6640-4b80-93fd-4cf14826a563"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 08:18:27.207755 kubelet[2803]: I1027 08:18:27.200720 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eca1f3c-6640-4b80-93fd-4cf14826a563-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9eca1f3c-6640-4b80-93fd-4cf14826a563" (UID: "9eca1f3c-6640-4b80-93fd-4cf14826a563"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 08:18:27.216867 kubelet[2803]: I1027 08:18:27.201874 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eca1f3c-6640-4b80-93fd-4cf14826a563-kube-api-access-mt25v" (OuterVolumeSpecName: "kube-api-access-mt25v") pod "9eca1f3c-6640-4b80-93fd-4cf14826a563" (UID: "9eca1f3c-6640-4b80-93fd-4cf14826a563"). InnerVolumeSpecName "kube-api-access-mt25v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 08:18:27.287269 kubelet[2803]: I1027 08:18:27.286978 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eca1f3c-6640-4b80-93fd-4cf14826a563-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 27 08:18:27.287531 kubelet[2803]: I1027 08:18:27.287516 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eca1f3c-6640-4b80-93fd-4cf14826a563-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 27 08:18:27.287598 kubelet[2803]: I1027 08:18:27.287588 2803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mt25v\" (UniqueName: \"kubernetes.io/projected/9eca1f3c-6640-4b80-93fd-4cf14826a563-kube-api-access-mt25v\") on node \"localhost\" DevicePath \"\"" Oct 27 08:18:27.316696 containerd[1622]: 2025-10-27 08:18:27.158 [INFO][4041] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" Oct 27 08:18:27.316696 containerd[1622]: 2025-10-27 08:18:27.159 [INFO][4041] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" iface="eth0" netns="/var/run/netns/cni-b0d88519-ed28-061e-f7a4-bb6ac914b111" Oct 27 08:18:27.316696 containerd[1622]: 2025-10-27 08:18:27.160 [INFO][4041] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" iface="eth0" netns="/var/run/netns/cni-b0d88519-ed28-061e-f7a4-bb6ac914b111" Oct 27 08:18:27.316696 containerd[1622]: 2025-10-27 08:18:27.161 [INFO][4041] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" iface="eth0" netns="/var/run/netns/cni-b0d88519-ed28-061e-f7a4-bb6ac914b111" Oct 27 08:18:27.316696 containerd[1622]: 2025-10-27 08:18:27.161 [INFO][4041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" Oct 27 08:18:27.316696 containerd[1622]: 2025-10-27 08:18:27.161 [INFO][4041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" Oct 27 08:18:27.316696 containerd[1622]: 2025-10-27 08:18:27.281 [INFO][4076] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" HandleID="k8s-pod-network.5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" Workload="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:27.316696 containerd[1622]: 2025-10-27 08:18:27.284 [INFO][4076] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:27.316696 containerd[1622]: 2025-10-27 08:18:27.285 [INFO][4076] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:18:27.317022 containerd[1622]: 2025-10-27 08:18:27.307 [WARNING][4076] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" HandleID="k8s-pod-network.5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" Workload="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:27.317022 containerd[1622]: 2025-10-27 08:18:27.307 [INFO][4076] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" HandleID="k8s-pod-network.5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" Workload="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:27.317022 containerd[1622]: 2025-10-27 08:18:27.308 [INFO][4076] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:18:27.317022 containerd[1622]: 2025-10-27 08:18:27.313 [INFO][4041] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d" Oct 27 08:18:27.321433 containerd[1622]: time="2025-10-27T08:18:27.320011689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-jk2nn,Uid:77100d96-e703-4e0a-b71a-6946f424cbfa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:27.324227 kubelet[2803]: E1027 08:18:27.322999 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:27.324227 kubelet[2803]: E1027 08:18:27.323092 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" Oct 27 08:18:27.324227 kubelet[2803]: E1027 08:18:27.323115 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" Oct 27 08:18:27.324352 kubelet[2803]: E1027 08:18:27.323179 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b74c9c6f-jk2nn_calico-apiserver(77100d96-e703-4e0a-b71a-6946f424cbfa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b74c9c6f-jk2nn_calico-apiserver(77100d96-e703-4e0a-b71a-6946f424cbfa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b866acc7b4eb81108b4a47b16ced9adca2c880f0176f43b66b2c6f91212210d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" podUID="77100d96-e703-4e0a-b71a-6946f424cbfa" Oct 27 08:18:27.331144 containerd[1622]: 2025-10-27 08:18:27.256 [INFO][4046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" Oct 27 08:18:27.331144 containerd[1622]: 2025-10-27 08:18:27.256 [INFO][4046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" iface="eth0" netns="/var/run/netns/cni-b8f66eca-3d47-7ef1-6839-8424da49a764" Oct 27 08:18:27.331144 containerd[1622]: 2025-10-27 08:18:27.257 [INFO][4046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" iface="eth0" netns="/var/run/netns/cni-b8f66eca-3d47-7ef1-6839-8424da49a764" Oct 27 08:18:27.331144 containerd[1622]: 2025-10-27 08:18:27.258 [INFO][4046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" iface="eth0" netns="/var/run/netns/cni-b8f66eca-3d47-7ef1-6839-8424da49a764" Oct 27 08:18:27.331144 containerd[1622]: 2025-10-27 08:18:27.258 [INFO][4046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" Oct 27 08:18:27.331144 containerd[1622]: 2025-10-27 08:18:27.258 [INFO][4046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" Oct 27 08:18:27.331144 containerd[1622]: 2025-10-27 08:18:27.285 [INFO][4116] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" HandleID="k8s-pod-network.dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" Workload="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:27.331144 containerd[1622]: 2025-10-27 08:18:27.285 [INFO][4116] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:27.331144 containerd[1622]: 2025-10-27 08:18:27.309 [INFO][4116] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:18:27.331674 containerd[1622]: 2025-10-27 08:18:27.317 [WARNING][4116] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" HandleID="k8s-pod-network.dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" Workload="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:27.331674 containerd[1622]: 2025-10-27 08:18:27.317 [INFO][4116] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" HandleID="k8s-pod-network.dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" Workload="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:27.331674 containerd[1622]: 2025-10-27 08:18:27.321 [INFO][4116] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:18:27.331674 containerd[1622]: 2025-10-27 08:18:27.325 [INFO][4046] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7" Oct 27 08:18:27.333771 containerd[1622]: time="2025-10-27T08:18:27.333728663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ls4jw,Uid:efeba680-c11b-470e-be50-8994147e2b12,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:27.334478 kubelet[2803]: E1027 08:18:27.334098 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:27.334478 kubelet[2803]: E1027 08:18:27.334149 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ls4jw" Oct 27 08:18:27.334478 kubelet[2803]: E1027 08:18:27.334167 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ls4jw" Oct 27 08:18:27.334593 kubelet[2803]: E1027 08:18:27.334232 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ls4jw_kube-system(efeba680-c11b-470e-be50-8994147e2b12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ls4jw_kube-system(efeba680-c11b-470e-be50-8994147e2b12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd76edc570a573a7e5fe397a508e3fc744a4c055c0b50ce3f6cc985a7406fac7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ls4jw" podUID="efeba680-c11b-470e-be50-8994147e2b12" Oct 27 08:18:27.338740 containerd[1622]: 2025-10-27 08:18:27.253 [INFO][4057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" Oct 27 08:18:27.338740 containerd[1622]: 2025-10-27 08:18:27.257 [INFO][4057] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" iface="eth0" netns="/var/run/netns/cni-4799f9c8-ec9c-2986-835f-e0c158474d0b" Oct 27 08:18:27.338740 containerd[1622]: 2025-10-27 08:18:27.257 [INFO][4057] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" iface="eth0" netns="/var/run/netns/cni-4799f9c8-ec9c-2986-835f-e0c158474d0b" Oct 27 08:18:27.338740 containerd[1622]: 2025-10-27 08:18:27.257 [INFO][4057] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" iface="eth0" netns="/var/run/netns/cni-4799f9c8-ec9c-2986-835f-e0c158474d0b" Oct 27 08:18:27.338740 containerd[1622]: 2025-10-27 08:18:27.257 [INFO][4057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" Oct 27 08:18:27.338740 containerd[1622]: 2025-10-27 08:18:27.257 [INFO][4057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" Oct 27 08:18:27.338740 containerd[1622]: 2025-10-27 08:18:27.317 [INFO][4114] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" HandleID="k8s-pod-network.976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" Workload="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:27.338740 containerd[1622]: 2025-10-27 08:18:27.318 [INFO][4114] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:27.338740 containerd[1622]: 2025-10-27 08:18:27.321 [INFO][4114] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:18:27.338951 containerd[1622]: 2025-10-27 08:18:27.327 [WARNING][4114] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" HandleID="k8s-pod-network.976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" Workload="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:27.338951 containerd[1622]: 2025-10-27 08:18:27.327 [INFO][4114] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" HandleID="k8s-pod-network.976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" Workload="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:27.338951 containerd[1622]: 2025-10-27 08:18:27.329 [INFO][4114] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:18:27.338951 containerd[1622]: 2025-10-27 08:18:27.335 [INFO][4057] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd" Oct 27 08:18:27.340352 containerd[1622]: time="2025-10-27T08:18:27.340293755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lc29c,Uid:cd97c86b-d16c-499b-928c-ee1afbc3c575,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:27.340655 kubelet[2803]: E1027 08:18:27.340622 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:18:27.340750 kubelet[2803]: E1027 08:18:27.340734 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lc29c" Oct 27 08:18:27.340819 kubelet[2803]: E1027 08:18:27.340805 2803 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lc29c" Oct 27 08:18:27.340922 kubelet[2803]: E1027 08:18:27.340902 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-lc29c_calico-system(cd97c86b-d16c-499b-928c-ee1afbc3c575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-lc29c_calico-system(cd97c86b-d16c-499b-928c-ee1afbc3c575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"976b0aed780241a6c0bc92782dd16e228c2fd8f7c813e867c6e188772144e8cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-lc29c" podUID="cd97c86b-d16c-499b-928c-ee1afbc3c575" Oct 27 08:18:27.353944 containerd[1622]: time="2025-10-27T08:18:27.353901664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa\" id:\"21f41980799183ecd6231c72be129a77ce136d7877ca23619dad405c24c60ece\" pid:4106 exit_status:1 exited_at:{seconds:1761553107 nanos:353508236}" Oct 27 08:18:27.523565 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:36692.service - OpenSSH per-connection server daemon (10.0.0.1:36692). 
Oct 27 08:18:27.603037 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 36692 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:27.604928 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:27.609852 systemd-logind[1605]: New session 11 of user core. Oct 27 08:18:27.624379 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 27 08:18:27.665379 systemd[1]: run-netns-cni\x2db0d88519\x2ded28\x2d061e\x2df7a4\x2dbb6ac914b111.mount: Deactivated successfully. Oct 27 08:18:27.665513 systemd[1]: var-lib-kubelet-pods-9eca1f3c\x2d6640\x2d4b80\x2d93fd\x2d4cf14826a563-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmt25v.mount: Deactivated successfully. Oct 27 08:18:27.665613 systemd[1]: var-lib-kubelet-pods-9eca1f3c\x2d6640\x2d4b80\x2d93fd\x2d4cf14826a563-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 27 08:18:27.756585 sshd[4155]: Connection closed by 10.0.0.1 port 36692 Oct 27 08:18:27.756920 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:27.762111 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:36692.service: Deactivated successfully. Oct 27 08:18:27.764339 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 08:18:27.765100 systemd-logind[1605]: Session 11 logged out. Waiting for processes to exit. Oct 27 08:18:27.766470 systemd-logind[1605]: Removed session 11. Oct 27 08:18:27.884769 containerd[1622]: time="2025-10-27T08:18:27.884629022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-rnnck,Uid:bce4a7cd-a2cf-439e-a5d4-f335be73a306,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:18:27.894338 systemd[1]: Removed slice kubepods-besteffort-pod9eca1f3c_6640_4b80_93fd_4cf14826a563.slice - libcontainer container kubepods-besteffort-pod9eca1f3c_6640_4b80_93fd_4cf14826a563.slice. 
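Annotation: the mount units reported as "Deactivated successfully" just above (run-netns-cni\x2db0d88519... and the var-lib-kubelet-pods-9eca1f3c... volume mounts) are ordinary paths encoded as systemd unit names: "/" becomes "-" and other punctuation is escaped as \xNN. A simplified re-implementation of the reverse mapping, for reading these names back (not systemd-escape itself):

    // unit_to_path.go - a small sketch that turns a systemd .mount unit name
    // from the log back into the filesystem path it represents.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnit reverses systemd's path escaping for a ".mount" unit name:
    // "-" separators become "/", and "\xNN" sequences become the raw byte.
    func unescapeUnit(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        var b strings.Builder
        b.WriteByte('/')
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '-':
                b.WriteByte('/')
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 3
                    continue
                }
                b.WriteByte(name[i])
            default:
                b.WriteByte(name[i])
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(unescapeUnit(`run-netns-cni\x2db0d88519\x2ded28\x2d061e\x2df7a4\x2dbb6ac914b111.mount`))
        // -> /run/netns/cni-b0d88519-ed28-061e-f7a4-bb6ac914b111
        // (the /var/run/netns path from the Calico teardown above; /var/run is a symlink to /run)
    }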
Oct 27 08:18:28.033845 systemd-networkd[1531]: cali79d87b22345: Link UP Oct 27 08:18:28.034661 systemd-networkd[1531]: cali79d87b22345: Gained carrier Oct 27 08:18:28.050410 kubelet[2803]: E1027 08:18:28.049903 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:28.083532 containerd[1622]: time="2025-10-27T08:18:28.083456497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lc29c,Uid:cd97c86b-d16c-499b-928c-ee1afbc3c575,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:28.107318 containerd[1622]: 2025-10-27 08:18:27.908 [INFO][4170] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:18:28.107318 containerd[1622]: 2025-10-27 08:18:27.920 [INFO][4170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0 calico-apiserver-66b74c9c6f- calico-apiserver bce4a7cd-a2cf-439e-a5d4-f335be73a306 843 0 2025-10-27 08:17:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b74c9c6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66b74c9c6f-rnnck eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79d87b22345 [] [] }} ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-rnnck" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-" Oct 27 08:18:28.107318 containerd[1622]: 2025-10-27 08:18:27.920 [INFO][4170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-rnnck" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" Oct 27 08:18:28.107318 containerd[1622]: 2025-10-27 08:18:27.946 [INFO][4183] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" HandleID="k8s-pod-network.c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Workload="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.946 [INFO][4183] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" HandleID="k8s-pod-network.c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Workload="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000597030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66b74c9c6f-rnnck", "timestamp":"2025-10-27 08:18:27.946475606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.946 [INFO][4183] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.946 [INFO][4183] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.946 [INFO][4183] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.953 [INFO][4183] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" host="localhost" Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.959 [INFO][4183] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.963 [INFO][4183] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.964 [INFO][4183] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.966 [INFO][4183] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:28.107607 containerd[1622]: 2025-10-27 08:18:27.966 [INFO][4183] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" host="localhost" Oct 27 08:18:28.107830 containerd[1622]: 2025-10-27 08:18:27.967 [INFO][4183] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80 Oct 27 08:18:28.107830 containerd[1622]: 2025-10-27 08:18:28.004 [INFO][4183] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" host="localhost" Oct 27 08:18:28.107830 containerd[1622]: 2025-10-27 08:18:28.023 [INFO][4183] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" host="localhost" Oct 27 08:18:28.107830 containerd[1622]: 2025-10-27 08:18:28.023 [INFO][4183] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" host="localhost" Oct 27 08:18:28.107830 containerd[1622]: 2025-10-27 08:18:28.023 [INFO][4183] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
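Annotation: with calico-node now running, the IPAM plugin takes the host-wide lock, confirms this host's affinity for block 192.168.88.128/26, and claims 192.168.88.129/26 for the apiserver pod before releasing the lock. A toy illustration of that assignment step follows; real Calico IPAM also persists the block ("Writing block in order to claim IPs") and serializes everything on the lock shown above:

    // ipam_sketch.go - a toy version (not Calico's IPAM) of "Attempting to
    // assign 1 addresses from block block=192.168.88.128/26": hand out the
    // first free host address, which reproduces the .129 -> .130 -> .131
    // sequence seen for the apiserver, coredns and goldmane pods below.
    package main

    import (
        "fmt"
        "net/netip"
    )

    // assignFromBlock returns the next unallocated address in the block,
    // skipping the block's own network address.
    func assignFromBlock(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                allocated[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        allocated := map[netip.Addr]bool{}

        for _, pod := range []string{
            "calico-apiserver-66b74c9c6f-rnnck", // gets 192.168.88.129 in the log
            "coredns-66bc5c9577-ls4jw",          // gets 192.168.88.130
            "goldmane-7c778bb748-lc29c",         // gets 192.168.88.131
        } {
            if addr, ok := assignFromBlock(block, allocated); ok {
                fmt.Printf("%s -> %s/26\n", pod, addr)
            }
        }
    }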
Oct 27 08:18:28.107830 containerd[1622]: 2025-10-27 08:18:28.023 [INFO][4183] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" HandleID="k8s-pod-network.c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Workload="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" Oct 27 08:18:28.107984 containerd[1622]: 2025-10-27 08:18:28.026 [INFO][4170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-rnnck" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0", GenerateName:"calico-apiserver-66b74c9c6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bce4a7cd-a2cf-439e-a5d4-f335be73a306", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b74c9c6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66b74c9c6f-rnnck", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79d87b22345", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:28.108044 containerd[1622]: 2025-10-27 08:18:28.026 [INFO][4170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-rnnck" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" Oct 27 08:18:28.108044 containerd[1622]: 2025-10-27 08:18:28.026 [INFO][4170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79d87b22345 ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-rnnck" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" Oct 27 08:18:28.108044 containerd[1622]: 2025-10-27 08:18:28.034 [INFO][4170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-rnnck" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" Oct 27 08:18:28.108108 containerd[1622]: 2025-10-27 08:18:28.035 [INFO][4170] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-rnnck" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0", GenerateName:"calico-apiserver-66b74c9c6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bce4a7cd-a2cf-439e-a5d4-f335be73a306", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b74c9c6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80", Pod:"calico-apiserver-66b74c9c6f-rnnck", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79d87b22345", MAC:"76:d4:44:d4:78:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:28.108158 containerd[1622]: 2025-10-27 08:18:28.090 [INFO][4170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-rnnck" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--rnnck-eth0" Oct 27 08:18:28.129326 containerd[1622]: time="2025-10-27T08:18:28.129282489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-jk2nn,Uid:77100d96-e703-4e0a-b71a-6946f424cbfa,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:18:28.135321 kubelet[2803]: E1027 08:18:28.134088 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:28.139241 containerd[1622]: time="2025-10-27T08:18:28.136254514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ls4jw,Uid:efeba680-c11b-470e-be50-8994147e2b12,Namespace:kube-system,Attempt:0,}" Oct 27 08:18:28.208551 systemd[1]: Created slice kubepods-besteffort-pod5deac30e_2d61_4e43_811d_2847f4300da3.slice - libcontainer container kubepods-besteffort-pod5deac30e_2d61_4e43_811d_2847f4300da3.slice. 
Oct 27 08:18:28.264899 containerd[1622]: time="2025-10-27T08:18:28.264442978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa\" id:\"74b5167ec9c44600a6273468eb88649d4c5dc6e1fbdff25dfd251f5b6a9d3f0d\" pid:4205 exit_status:1 exited_at:{seconds:1761553108 nanos:263720242}" Oct 27 08:18:28.300483 kubelet[2803]: I1027 08:18:28.300409 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5deac30e-2d61-4e43-811d-2847f4300da3-whisker-backend-key-pair\") pod \"whisker-5b8889988c-mxjjc\" (UID: \"5deac30e-2d61-4e43-811d-2847f4300da3\") " pod="calico-system/whisker-5b8889988c-mxjjc" Oct 27 08:18:28.300483 kubelet[2803]: I1027 08:18:28.300460 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5deac30e-2d61-4e43-811d-2847f4300da3-whisker-ca-bundle\") pod \"whisker-5b8889988c-mxjjc\" (UID: \"5deac30e-2d61-4e43-811d-2847f4300da3\") " pod="calico-system/whisker-5b8889988c-mxjjc" Oct 27 08:18:28.300483 kubelet[2803]: I1027 08:18:28.300488 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvnsp\" (UniqueName: \"kubernetes.io/projected/5deac30e-2d61-4e43-811d-2847f4300da3-kube-api-access-lvnsp\") pod \"whisker-5b8889988c-mxjjc\" (UID: \"5deac30e-2d61-4e43-811d-2847f4300da3\") " pod="calico-system/whisker-5b8889988c-mxjjc" Oct 27 08:18:28.304144 containerd[1622]: time="2025-10-27T08:18:28.304082170Z" level=info msg="connecting to shim c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80" address="unix:///run/containerd/s/dd3b68d404498ead6cb3015dd876c3a0c06df3156586969e6c3f156bb779845c" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:28.335732 systemd-networkd[1531]: calib4ceeaf49b4: Link UP Oct 27 08:18:28.340273 systemd-networkd[1531]: calib4ceeaf49b4: Gained carrier Oct 27 08:18:28.349280 containerd[1622]: 2025-10-27 08:18:28.231 [INFO][4253] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:18:28.349280 containerd[1622]: 2025-10-27 08:18:28.242 [INFO][4253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--ls4jw-eth0 coredns-66bc5c9577- kube-system efeba680-c11b-470e-be50-8994147e2b12 950 0 2025-10-27 08:17:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-ls4jw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib4ceeaf49b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Namespace="kube-system" Pod="coredns-66bc5c9577-ls4jw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ls4jw-" Oct 27 08:18:28.349280 containerd[1622]: 2025-10-27 08:18:28.242 [INFO][4253] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Namespace="kube-system" Pod="coredns-66bc5c9577-ls4jw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:28.349280 containerd[1622]: 2025-10-27 
08:18:28.280 [INFO][4285] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" HandleID="k8s-pod-network.9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Workload="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.280 [INFO][4285] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" HandleID="k8s-pod-network.9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Workload="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-ls4jw", "timestamp":"2025-10-27 08:18:28.280064625 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.280 [INFO][4285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.281 [INFO][4285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.281 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.292 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" host="localhost" Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.299 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.307 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.311 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.312 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:28.349483 containerd[1622]: 2025-10-27 08:18:28.312 [INFO][4285] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" host="localhost" Oct 27 08:18:28.349714 containerd[1622]: 2025-10-27 08:18:28.314 [INFO][4285] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263 Oct 27 08:18:28.349714 containerd[1622]: 2025-10-27 08:18:28.318 [INFO][4285] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" host="localhost" Oct 27 08:18:28.349714 containerd[1622]: 2025-10-27 08:18:28.324 [INFO][4285] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" host="localhost" Oct 27 08:18:28.349714 containerd[1622]: 2025-10-27 08:18:28.324 [INFO][4285] ipam/ipam.go 
878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" host="localhost" Oct 27 08:18:28.349714 containerd[1622]: 2025-10-27 08:18:28.324 [INFO][4285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:18:28.349714 containerd[1622]: 2025-10-27 08:18:28.324 [INFO][4285] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" HandleID="k8s-pod-network.9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Workload="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:28.349832 containerd[1622]: 2025-10-27 08:18:28.329 [INFO][4253] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Namespace="kube-system" Pod="coredns-66bc5c9577-ls4jw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ls4jw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"efeba680-c11b-470e-be50-8994147e2b12", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 17, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-ls4jw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4ceeaf49b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:28.349832 containerd[1622]: 2025-10-27 08:18:28.330 [INFO][4253] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Namespace="kube-system" Pod="coredns-66bc5c9577-ls4jw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:28.349832 containerd[1622]: 2025-10-27 08:18:28.330 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to calib4ceeaf49b4 ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Namespace="kube-system" Pod="coredns-66bc5c9577-ls4jw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:28.349832 containerd[1622]: 2025-10-27 08:18:28.335 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Namespace="kube-system" Pod="coredns-66bc5c9577-ls4jw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:28.349832 containerd[1622]: 2025-10-27 08:18:28.336 [INFO][4253] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Namespace="kube-system" Pod="coredns-66bc5c9577-ls4jw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ls4jw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"efeba680-c11b-470e-be50-8994147e2b12", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 17, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263", Pod:"coredns-66bc5c9577-ls4jw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4ceeaf49b4", MAC:"f2:29:59:bc:cb:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:28.349832 containerd[1622]: 2025-10-27 08:18:28.345 [INFO][4253] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" Namespace="kube-system" Pod="coredns-66bc5c9577-ls4jw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ls4jw-eth0" Oct 27 08:18:28.351403 systemd[1]: Started cri-containerd-c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80.scope - 
libcontainer container c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80. Oct 27 08:18:28.367319 systemd-resolved[1296]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:18:28.375429 containerd[1622]: time="2025-10-27T08:18:28.375369707Z" level=info msg="connecting to shim 9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263" address="unix:///run/containerd/s/97cf19f6afaf7734f9c5bf08573ee3d9e159740a50cbad741289bae8f8914578" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:28.406449 systemd[1]: Started cri-containerd-9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263.scope - libcontainer container 9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263. Oct 27 08:18:28.430464 systemd-resolved[1296]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:18:28.475157 containerd[1622]: time="2025-10-27T08:18:28.475099202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ls4jw,Uid:efeba680-c11b-470e-be50-8994147e2b12,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263\"" Oct 27 08:18:28.477233 kubelet[2803]: E1027 08:18:28.476361 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:28.482432 containerd[1622]: time="2025-10-27T08:18:28.482367693Z" level=info msg="CreateContainer within sandbox \"9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:18:28.485124 containerd[1622]: time="2025-10-27T08:18:28.484970057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-rnnck,Uid:bce4a7cd-a2cf-439e-a5d4-f335be73a306,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c0d5db7bb733e7d3928b4f89b1be6fe8eb00d100f24e3ded08ee2d629af95f80\"" Oct 27 08:18:28.487184 containerd[1622]: time="2025-10-27T08:18:28.487139128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:18:28.488417 systemd-networkd[1531]: calicc0aedfd9b6: Link UP Oct 27 08:18:28.488970 systemd-networkd[1531]: calicc0aedfd9b6: Gained carrier Oct 27 08:18:28.512071 containerd[1622]: time="2025-10-27T08:18:28.512008210Z" level=info msg="Container 35cb30cf6557fba5713514bd81ead6f3defdbbd0405d52ba14cb572c405f739f: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.209 [INFO][4220] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.233 [INFO][4220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--lc29c-eth0 goldmane-7c778bb748- calico-system cd97c86b-d16c-499b-928c-ee1afbc3c575 951 0 2025-10-27 08:18:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-lc29c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicc0aedfd9b6 [] [] }} ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Namespace="calico-system" 
Pod="goldmane-7c778bb748-lc29c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lc29c-" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.234 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Namespace="calico-system" Pod="goldmane-7c778bb748-lc29c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.305 [INFO][4278] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" HandleID="k8s-pod-network.41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Workload="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.305 [INFO][4278] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" HandleID="k8s-pod-network.41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Workload="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-lc29c", "timestamp":"2025-10-27 08:18:28.305551688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.305 [INFO][4278] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.325 [INFO][4278] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.325 [INFO][4278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.394 [INFO][4278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" host="localhost" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.400 [INFO][4278] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.408 [INFO][4278] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.414 [INFO][4278] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.416 [INFO][4278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.416 [INFO][4278] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" host="localhost" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.417 [INFO][4278] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147 Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.472 [INFO][4278] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" host="localhost" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.478 [INFO][4278] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" host="localhost" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.479 [INFO][4278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" host="localhost" Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.479 [INFO][4278] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:18:28.514826 containerd[1622]: 2025-10-27 08:18:28.479 [INFO][4278] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" HandleID="k8s-pod-network.41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Workload="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:28.515512 containerd[1622]: 2025-10-27 08:18:28.484 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Namespace="calico-system" Pod="goldmane-7c778bb748-lc29c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--lc29c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"cd97c86b-d16c-499b-928c-ee1afbc3c575", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-lc29c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicc0aedfd9b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:28.515512 containerd[1622]: 2025-10-27 08:18:28.484 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Namespace="calico-system" Pod="goldmane-7c778bb748-lc29c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:28.515512 containerd[1622]: 2025-10-27 08:18:28.484 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc0aedfd9b6 ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Namespace="calico-system" Pod="goldmane-7c778bb748-lc29c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:28.515512 containerd[1622]: 2025-10-27 08:18:28.489 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Namespace="calico-system" Pod="goldmane-7c778bb748-lc29c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:28.515512 containerd[1622]: 2025-10-27 08:18:28.490 [INFO][4220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Namespace="calico-system" Pod="goldmane-7c778bb748-lc29c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--lc29c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"cd97c86b-d16c-499b-928c-ee1afbc3c575", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147", Pod:"goldmane-7c778bb748-lc29c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicc0aedfd9b6", MAC:"2a:24:2b:19:67:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:28.515512 containerd[1622]: 2025-10-27 08:18:28.503 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" Namespace="calico-system" Pod="goldmane-7c778bb748-lc29c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lc29c-eth0" Oct 27 08:18:28.525340 containerd[1622]: time="2025-10-27T08:18:28.525194477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8889988c-mxjjc,Uid:5deac30e-2d61-4e43-811d-2847f4300da3,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:28.546548 containerd[1622]: time="2025-10-27T08:18:28.546479312Z" level=info msg="CreateContainer within sandbox \"9ce7bdb3357a2fe7faf545269fbfeaf15b92cb2b2a3e84867229293d13f91263\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35cb30cf6557fba5713514bd81ead6f3defdbbd0405d52ba14cb572c405f739f\"" Oct 27 08:18:28.551694 containerd[1622]: time="2025-10-27T08:18:28.551651499Z" level=info msg="StartContainer for \"35cb30cf6557fba5713514bd81ead6f3defdbbd0405d52ba14cb572c405f739f\"" Oct 27 08:18:28.568263 containerd[1622]: time="2025-10-27T08:18:28.568148380Z" level=info msg="connecting to shim 35cb30cf6557fba5713514bd81ead6f3defdbbd0405d52ba14cb572c405f739f" address="unix:///run/containerd/s/97cf19f6afaf7734f9c5bf08573ee3d9e159740a50cbad741289bae8f8914578" protocol=ttrpc version=3 Oct 27 08:18:28.612988 systemd-networkd[1531]: cali56f9f2aacff: Link UP Oct 27 08:18:28.619256 systemd-networkd[1531]: cali56f9f2aacff: Gained carrier Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.200 [INFO][4228] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.224 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0 calico-apiserver-66b74c9c6f- calico-apiserver 77100d96-e703-4e0a-b71a-6946f424cbfa 946 0 2025-10-27 08:17:58 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b74c9c6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66b74c9c6f-jk2nn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56f9f2aacff [] [] }} ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-jk2nn" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.224 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-jk2nn" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.306 [INFO][4267] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" HandleID="k8s-pod-network.5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Workload="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.307 [INFO][4267] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" HandleID="k8s-pod-network.5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Workload="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b36d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66b74c9c6f-jk2nn", "timestamp":"2025-10-27 08:18:28.306851137 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.307 [INFO][4267] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.479 [INFO][4267] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.479 [INFO][4267] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.493 [INFO][4267] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" host="localhost" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.517 [INFO][4267] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.525 [INFO][4267] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.528 [INFO][4267] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.533 [INFO][4267] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.533 [INFO][4267] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" host="localhost" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.535 [INFO][4267] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.545 [INFO][4267] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" host="localhost" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.553 [INFO][4267] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" host="localhost" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.567 [INFO][4267] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" host="localhost" Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.567 [INFO][4267] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:18:28.628321 containerd[1622]: 2025-10-27 08:18:28.567 [INFO][4267] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" HandleID="k8s-pod-network.5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Workload="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:28.631666 containerd[1622]: 2025-10-27 08:18:28.593 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-jk2nn" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0", GenerateName:"calico-apiserver-66b74c9c6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"77100d96-e703-4e0a-b71a-6946f424cbfa", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b74c9c6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66b74c9c6f-jk2nn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56f9f2aacff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:28.631666 containerd[1622]: 2025-10-27 08:18:28.593 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-jk2nn" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:28.631666 containerd[1622]: 2025-10-27 08:18:28.593 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56f9f2aacff ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-jk2nn" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:28.631666 containerd[1622]: 2025-10-27 08:18:28.612 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-jk2nn" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:28.631666 containerd[1622]: 2025-10-27 08:18:28.617 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-jk2nn" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0", GenerateName:"calico-apiserver-66b74c9c6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"77100d96-e703-4e0a-b71a-6946f424cbfa", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b74c9c6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de", Pod:"calico-apiserver-66b74c9c6f-jk2nn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56f9f2aacff", MAC:"46:a9:1a:4a:3d:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:28.631666 containerd[1622]: 2025-10-27 08:18:28.625 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" Namespace="calico-apiserver" Pod="calico-apiserver-66b74c9c6f-jk2nn" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b74c9c6f--jk2nn-eth0" Oct 27 08:18:28.641760 containerd[1622]: time="2025-10-27T08:18:28.641254099Z" level=info msg="connecting to shim 41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147" address="unix:///run/containerd/s/7705f7187f72699d3ede99ae6d3ba5b2b64b64deaed785bab8ac15e5bcd43cde" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:28.650162 systemd[1]: Started cri-containerd-35cb30cf6557fba5713514bd81ead6f3defdbbd0405d52ba14cb572c405f739f.scope - libcontainer container 35cb30cf6557fba5713514bd81ead6f3defdbbd0405d52ba14cb572c405f739f. Oct 27 08:18:28.729448 systemd[1]: Started cri-containerd-41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147.scope - libcontainer container 41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147. 
Oct 27 08:18:28.744988 containerd[1622]: time="2025-10-27T08:18:28.744937273Z" level=info msg="connecting to shim 5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de" address="unix:///run/containerd/s/16ad554e8932893bef29e055e93c6da3508442e51a9640bb08b5a6ed11cc27e1" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:28.787090 systemd-resolved[1296]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:18:28.805675 systemd[1]: Started cri-containerd-5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de.scope - libcontainer container 5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de. Oct 27 08:18:28.878799 systemd-resolved[1296]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:18:28.900242 containerd[1622]: time="2025-10-27T08:18:28.897585031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:29.001664 containerd[1622]: time="2025-10-27T08:18:29.001316536Z" level=info msg="StartContainer for \"35cb30cf6557fba5713514bd81ead6f3defdbbd0405d52ba14cb572c405f739f\" returns successfully" Oct 27 08:18:29.025289 systemd-networkd[1531]: cali9f3fb5df589: Link UP Oct 27 08:18:29.026070 systemd-networkd[1531]: cali9f3fb5df589: Gained carrier Oct 27 08:18:29.028722 kubelet[2803]: E1027 08:18:29.028685 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:29.029583 containerd[1622]: time="2025-10-27T08:18:29.029535624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x2z4l,Uid:52472afe-0ee8-40cc-8073-d9351a66e2e8,Namespace:kube-system,Attempt:0,}" Oct 27 08:18:29.042523 containerd[1622]: time="2025-10-27T08:18:29.040696579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lc29c,Uid:cd97c86b-d16c-499b-928c-ee1afbc3c575,Namespace:calico-system,Attempt:0,} returns sandbox id \"41e4a7b03e8dfff2fa63edfa72231a8cdcc502bcc8c257afa96d47e5013fd147\"" Oct 27 08:18:29.053494 kubelet[2803]: E1027 08:18:29.053459 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:29.105059 containerd[1622]: time="2025-10-27T08:18:29.104964266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:18:29.111116 containerd[1622]: time="2025-10-27T08:18:29.111022335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:18:29.111470 kubelet[2803]: E1027 08:18:29.111423 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:18:29.111567 kubelet[2803]: E1027 08:18:29.111486 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:18:29.111787 kubelet[2803]: E1027 08:18:29.111761 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66b74c9c6f-rnnck_calico-apiserver(bce4a7cd-a2cf-439e-a5d4-f335be73a306): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:29.111841 kubelet[2803]: E1027 08:18:29.111802 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" podUID="bce4a7cd-a2cf-439e-a5d4-f335be73a306" Oct 27 08:18:29.112111 containerd[1622]: time="2025-10-27T08:18:29.112079489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:18:29.176761 containerd[1622]: time="2025-10-27T08:18:29.176706501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b74c9c6f-jk2nn,Uid:77100d96-e703-4e0a-b71a-6946f424cbfa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5ea2b972199d9238db8e693e96e48969621df14c04004fc8199a4c76be47d9de\"" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.628 [INFO][4453] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.691 [INFO][4453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b8889988c--mxjjc-eth0 whisker-5b8889988c- calico-system 5deac30e-2d61-4e43-811d-2847f4300da3 982 0 2025-10-27 08:18:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b8889988c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b8889988c-mxjjc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9f3fb5df589 [] [] }} ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Namespace="calico-system" Pod="whisker-5b8889988c-mxjjc" WorkloadEndpoint="localhost-k8s-whisker--5b8889988c--mxjjc-" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.691 [INFO][4453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Namespace="calico-system" Pod="whisker-5b8889988c-mxjjc" WorkloadEndpoint="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.850 [INFO][4571] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" HandleID="k8s-pod-network.489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Workload="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" Oct 27 
08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.851 [INFO][4571] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" HandleID="k8s-pod-network.489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Workload="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c5000), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b8889988c-mxjjc", "timestamp":"2025-10-27 08:18:28.85017739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.851 [INFO][4571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.851 [INFO][4571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.851 [INFO][4571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.872 [INFO][4571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" host="localhost" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.879 [INFO][4571] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.887 [INFO][4571] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.902 [INFO][4571] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.908 [INFO][4571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.909 [INFO][4571] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" host="localhost" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.912 [INFO][4571] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2 Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:28.955 [INFO][4571] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" host="localhost" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:29.005 [INFO][4571] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" host="localhost" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:29.006 [INFO][4571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" host="localhost" Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:29.006 [INFO][4571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:18:29.191035 containerd[1622]: 2025-10-27 08:18:29.006 [INFO][4571] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" HandleID="k8s-pod-network.489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Workload="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" Oct 27 08:18:29.192200 containerd[1622]: 2025-10-27 08:18:29.016 [INFO][4453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Namespace="calico-system" Pod="whisker-5b8889988c-mxjjc" WorkloadEndpoint="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b8889988c--mxjjc-eth0", GenerateName:"whisker-5b8889988c-", Namespace:"calico-system", SelfLink:"", UID:"5deac30e-2d61-4e43-811d-2847f4300da3", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b8889988c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b8889988c-mxjjc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9f3fb5df589", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:29.192200 containerd[1622]: 2025-10-27 08:18:29.017 [INFO][4453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Namespace="calico-system" Pod="whisker-5b8889988c-mxjjc" WorkloadEndpoint="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" Oct 27 08:18:29.192200 containerd[1622]: 2025-10-27 08:18:29.017 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f3fb5df589 ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Namespace="calico-system" Pod="whisker-5b8889988c-mxjjc" WorkloadEndpoint="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" Oct 27 08:18:29.192200 containerd[1622]: 2025-10-27 08:18:29.029 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Namespace="calico-system" Pod="whisker-5b8889988c-mxjjc" WorkloadEndpoint="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" Oct 27 08:18:29.192200 containerd[1622]: 2025-10-27 08:18:29.034 [INFO][4453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Namespace="calico-system" Pod="whisker-5b8889988c-mxjjc" WorkloadEndpoint="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b8889988c--mxjjc-eth0", GenerateName:"whisker-5b8889988c-", Namespace:"calico-system", SelfLink:"", UID:"5deac30e-2d61-4e43-811d-2847f4300da3", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b8889988c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2", Pod:"whisker-5b8889988c-mxjjc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9f3fb5df589", MAC:"16:40:63:41:ae:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:29.192200 containerd[1622]: 2025-10-27 08:18:29.182 [INFO][4453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" Namespace="calico-system" Pod="whisker-5b8889988c-mxjjc" WorkloadEndpoint="localhost-k8s-whisker--5b8889988c--mxjjc-eth0" Oct 27 08:18:29.206241 kubelet[2803]: I1027 08:18:29.205935 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ls4jw" podStartSLOduration=41.205908301 podStartE2EDuration="41.205908301s" podCreationTimestamp="2025-10-27 08:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:18:29.202343582 +0000 UTC m=+47.418205161" watchObservedRunningTime="2025-10-27 08:18:29.205908301 +0000 UTC m=+47.421769881" Oct 27 08:18:29.222421 systemd-networkd[1531]: cali79d87b22345: Gained IPv6LL Oct 27 08:18:29.263333 containerd[1622]: time="2025-10-27T08:18:29.262471170Z" level=info msg="connecting to shim 489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2" address="unix:///run/containerd/s/272053bb69c65679d1eec33e8810e09f7ae4e38a641aad0021704be32515a5e4" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:29.268702 systemd-networkd[1531]: vxlan.calico: Link UP Oct 27 08:18:29.268714 systemd-networkd[1531]: vxlan.calico: Gained carrier Oct 27 08:18:29.307201 systemd[1]: Started cri-containerd-489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2.scope - libcontainer container 489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2. 
Oct 27 08:18:29.345691 systemd-resolved[1296]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:18:29.395240 containerd[1622]: time="2025-10-27T08:18:29.395178523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8889988c-mxjjc,Uid:5deac30e-2d61-4e43-811d-2847f4300da3,Namespace:calico-system,Attempt:0,} returns sandbox id \"489ec9015e578a60ab8f4f3535792490034f71542594f1fc503cecacf5db5de2\"" Oct 27 08:18:29.407359 systemd-networkd[1531]: cali8c127d7d355: Link UP Oct 27 08:18:29.409929 systemd-networkd[1531]: cali8c127d7d355: Gained carrier Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.281 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--x2z4l-eth0 coredns-66bc5c9577- kube-system 52472afe-0ee8-40cc-8073-d9351a66e2e8 842 0 2025-10-27 08:17:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-x2z4l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8c127d7d355 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Namespace="kube-system" Pod="coredns-66bc5c9577-x2z4l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x2z4l-" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.282 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Namespace="kube-system" Pod="coredns-66bc5c9577-x2z4l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.352 [INFO][4768] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" HandleID="k8s-pod-network.9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Workload="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.352 [INFO][4768] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" HandleID="k8s-pod-network.9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Workload="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001751d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-x2z4l", "timestamp":"2025-10-27 08:18:29.352547471 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.352 [INFO][4768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.352 [INFO][4768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.353 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.362 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" host="localhost" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.368 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.374 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.376 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.378 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.379 [INFO][4768] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" host="localhost" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.380 [INFO][4768] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.386 [INFO][4768] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" host="localhost" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.392 [INFO][4768] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" host="localhost" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.392 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" host="localhost" Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.393 [INFO][4768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:18:29.431286 containerd[1622]: 2025-10-27 08:18:29.393 [INFO][4768] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" HandleID="k8s-pod-network.9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Workload="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" Oct 27 08:18:29.432139 containerd[1622]: 2025-10-27 08:18:29.398 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Namespace="kube-system" Pod="coredns-66bc5c9577-x2z4l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--x2z4l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"52472afe-0ee8-40cc-8073-d9351a66e2e8", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 17, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-x2z4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c127d7d355", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:29.432139 containerd[1622]: 2025-10-27 08:18:29.399 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Namespace="kube-system" Pod="coredns-66bc5c9577-x2z4l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" Oct 27 08:18:29.432139 containerd[1622]: 2025-10-27 08:18:29.399 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c127d7d355 ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Namespace="kube-system" Pod="coredns-66bc5c9577-x2z4l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" Oct 27 08:18:29.432139 containerd[1622]: 2025-10-27 08:18:29.411 
[INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Namespace="kube-system" Pod="coredns-66bc5c9577-x2z4l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" Oct 27 08:18:29.432139 containerd[1622]: 2025-10-27 08:18:29.413 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Namespace="kube-system" Pod="coredns-66bc5c9577-x2z4l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--x2z4l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"52472afe-0ee8-40cc-8073-d9351a66e2e8", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 17, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf", Pod:"coredns-66bc5c9577-x2z4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c127d7d355", MAC:"b2:e7:ce:82:df:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:29.432139 containerd[1622]: 2025-10-27 08:18:29.425 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" Namespace="kube-system" Pod="coredns-66bc5c9577-x2z4l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x2z4l-eth0" Oct 27 08:18:29.461610 containerd[1622]: time="2025-10-27T08:18:29.461456122Z" level=info msg="connecting to shim 9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf" address="unix:///run/containerd/s/dc39c9aaebe15831eab293e4a7152573ef08611c97a9924eaa29b643255233e8" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:29.478586 systemd-networkd[1531]: calib4ceeaf49b4: Gained IPv6LL Oct 27 08:18:29.496505 systemd[1]: 
Started cri-containerd-9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf.scope - libcontainer container 9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf. Oct 27 08:18:29.515196 systemd-resolved[1296]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:18:29.552822 containerd[1622]: time="2025-10-27T08:18:29.552689796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x2z4l,Uid:52472afe-0ee8-40cc-8073-d9351a66e2e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf\"" Oct 27 08:18:29.554257 kubelet[2803]: E1027 08:18:29.554234 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:29.561263 containerd[1622]: time="2025-10-27T08:18:29.561197512Z" level=info msg="CreateContainer within sandbox \"9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:18:29.572878 containerd[1622]: time="2025-10-27T08:18:29.572287043Z" level=info msg="Container a11b61ee1b8596c9e0ef9b5fd836424d59c37af0a6e8346001cb1c1c274cb397: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:18:29.578277 containerd[1622]: time="2025-10-27T08:18:29.578244963Z" level=info msg="CreateContainer within sandbox \"9baf0bd865dc3afdc74130d13ecaa339504af67b119cfdc7e4fbc41af29c1dbf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a11b61ee1b8596c9e0ef9b5fd836424d59c37af0a6e8346001cb1c1c274cb397\"" Oct 27 08:18:29.579269 containerd[1622]: time="2025-10-27T08:18:29.579225163Z" level=info msg="StartContainer for \"a11b61ee1b8596c9e0ef9b5fd836424d59c37af0a6e8346001cb1c1c274cb397\"" Oct 27 08:18:29.580054 containerd[1622]: time="2025-10-27T08:18:29.580016678Z" level=info msg="connecting to shim a11b61ee1b8596c9e0ef9b5fd836424d59c37af0a6e8346001cb1c1c274cb397" address="unix:///run/containerd/s/dc39c9aaebe15831eab293e4a7152573ef08611c97a9924eaa29b643255233e8" protocol=ttrpc version=3 Oct 27 08:18:29.622374 systemd[1]: Started cri-containerd-a11b61ee1b8596c9e0ef9b5fd836424d59c37af0a6e8346001cb1c1c274cb397.scope - libcontainer container a11b61ee1b8596c9e0ef9b5fd836424d59c37af0a6e8346001cb1c1c274cb397. 
Oct 27 08:18:29.662797 containerd[1622]: time="2025-10-27T08:18:29.662658327Z" level=info msg="StartContainer for \"a11b61ee1b8596c9e0ef9b5fd836424d59c37af0a6e8346001cb1c1c274cb397\" returns successfully" Oct 27 08:18:29.742168 containerd[1622]: time="2025-10-27T08:18:29.742121470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:29.743539 containerd[1622]: time="2025-10-27T08:18:29.743507622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:18:29.743626 containerd[1622]: time="2025-10-27T08:18:29.743580539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:18:29.743862 kubelet[2803]: E1027 08:18:29.743822 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:18:29.743919 kubelet[2803]: E1027 08:18:29.743879 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:18:29.744258 kubelet[2803]: E1027 08:18:29.744051 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lc29c_calico-system(cd97c86b-d16c-499b-928c-ee1afbc3c575): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:29.744258 kubelet[2803]: E1027 08:18:29.744091 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lc29c" podUID="cd97c86b-d16c-499b-928c-ee1afbc3c575" Oct 27 08:18:29.744599 containerd[1622]: time="2025-10-27T08:18:29.744552863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:18:29.884740 containerd[1622]: time="2025-10-27T08:18:29.884599169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6bqk,Uid:2e0d791c-1eec-4e51-af5e-ee7c86a5bb94,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:29.884869 kubelet[2803]: I1027 08:18:29.884750 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eca1f3c-6640-4b80-93fd-4cf14826a563" path="/var/lib/kubelet/pods/9eca1f3c-6640-4b80-93fd-4cf14826a563/volumes" Oct 27 08:18:29.886369 containerd[1622]: time="2025-10-27T08:18:29.886339786Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-8cb769c6-rbvs2,Uid:eb96c43d-718e-4752-922d-cb8f671d414c,Namespace:calico-system,Attempt:0,}" Oct 27 08:18:30.015776 systemd-networkd[1531]: calic5bc629f11d: Link UP Oct 27 08:18:30.016351 systemd-networkd[1531]: calic5bc629f11d: Gained carrier Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.934 [INFO][4938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h6bqk-eth0 csi-node-driver- calico-system 2e0d791c-1eec-4e51-af5e-ee7c86a5bb94 723 0 2025-10-27 08:18:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h6bqk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic5bc629f11d [] [] }} ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Namespace="calico-system" Pod="csi-node-driver-h6bqk" WorkloadEndpoint="localhost-k8s-csi--node--driver--h6bqk-" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.935 [INFO][4938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Namespace="calico-system" Pod="csi-node-driver-h6bqk" WorkloadEndpoint="localhost-k8s-csi--node--driver--h6bqk-eth0" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.966 [INFO][4975] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" HandleID="k8s-pod-network.0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Workload="localhost-k8s-csi--node--driver--h6bqk-eth0" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.967 [INFO][4975] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" HandleID="k8s-pod-network.0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Workload="localhost-k8s-csi--node--driver--h6bqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a53f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h6bqk", "timestamp":"2025-10-27 08:18:29.966976041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.967 [INFO][4975] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.967 [INFO][4975] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.967 [INFO][4975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.979 [INFO][4975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" host="localhost" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.984 [INFO][4975] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.988 [INFO][4975] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.990 [INFO][4975] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.992 [INFO][4975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.992 [INFO][4975] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" host="localhost" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:29.996 [INFO][4975] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:30.002 [INFO][4975] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" host="localhost" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:30.008 [INFO][4975] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" host="localhost" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:30.008 [INFO][4975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" host="localhost" Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:30.008 [INFO][4975] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
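[Annotation] The IPAM trace above claims 192.168.88.135 out of the host's affine block 192.168.88.128/26 (64 addresses, .128 through .191). A short Go check with net/netip that an assigned address really falls inside the claimed block — the prefix and address are taken from the log, everything else is illustrative.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address reported by the Calico IPAM plugin above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := netip.MustParseAddr("192.168.88.135")

	// A /26 spans 2^(32-26) = 64 addresses: .128 up to .191.
	fmt.Printf("block %s contains %s: %v\n", block, assigned, block.Contains(assigned))
	fmt.Printf("addresses in block: %d\n", 1<<(32-block.Bits()))
}
```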
Oct 27 08:18:30.031939 containerd[1622]: 2025-10-27 08:18:30.008 [INFO][4975] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" HandleID="k8s-pod-network.0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Workload="localhost-k8s-csi--node--driver--h6bqk-eth0" Oct 27 08:18:30.032862 containerd[1622]: 2025-10-27 08:18:30.012 [INFO][4938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Namespace="calico-system" Pod="csi-node-driver-h6bqk" WorkloadEndpoint="localhost-k8s-csi--node--driver--h6bqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h6bqk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e0d791c-1eec-4e51-af5e-ee7c86a5bb94", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h6bqk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic5bc629f11d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:30.032862 containerd[1622]: 2025-10-27 08:18:30.012 [INFO][4938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Namespace="calico-system" Pod="csi-node-driver-h6bqk" WorkloadEndpoint="localhost-k8s-csi--node--driver--h6bqk-eth0" Oct 27 08:18:30.032862 containerd[1622]: 2025-10-27 08:18:30.012 [INFO][4938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5bc629f11d ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Namespace="calico-system" Pod="csi-node-driver-h6bqk" WorkloadEndpoint="localhost-k8s-csi--node--driver--h6bqk-eth0" Oct 27 08:18:30.032862 containerd[1622]: 2025-10-27 08:18:30.016 [INFO][4938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Namespace="calico-system" Pod="csi-node-driver-h6bqk" WorkloadEndpoint="localhost-k8s-csi--node--driver--h6bqk-eth0" Oct 27 08:18:30.032862 containerd[1622]: 2025-10-27 08:18:30.016 [INFO][4938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Namespace="calico-system" Pod="csi-node-driver-h6bqk" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--h6bqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h6bqk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e0d791c-1eec-4e51-af5e-ee7c86a5bb94", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c", Pod:"csi-node-driver-h6bqk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic5bc629f11d", MAC:"0e:e8:30:9a:22:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:30.032862 containerd[1622]: 2025-10-27 08:18:30.027 [INFO][4938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" Namespace="calico-system" Pod="csi-node-driver-h6bqk" WorkloadEndpoint="localhost-k8s-csi--node--driver--h6bqk-eth0" Oct 27 08:18:30.054789 containerd[1622]: time="2025-10-27T08:18:30.054726983Z" level=info msg="connecting to shim 0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c" address="unix:///run/containerd/s/77de6b1b3166910c97748a5b45fd7054ff27f8cf3a8472d50f445305d81adb2e" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:30.062180 kubelet[2803]: E1027 08:18:30.062150 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:30.073068 kubelet[2803]: E1027 08:18:30.073026 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:30.074180 kubelet[2803]: E1027 08:18:30.074153 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lc29c" podUID="cd97c86b-d16c-499b-928c-ee1afbc3c575" Oct 27 08:18:30.075925 kubelet[2803]: E1027 08:18:30.075885 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" podUID="bce4a7cd-a2cf-439e-a5d4-f335be73a306" Oct 27 08:18:30.082945 kubelet[2803]: I1027 08:18:30.081063 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x2z4l" podStartSLOduration=42.081035533 podStartE2EDuration="42.081035533s" podCreationTimestamp="2025-10-27 08:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:18:30.080328236 +0000 UTC m=+48.296189815" watchObservedRunningTime="2025-10-27 08:18:30.081035533 +0000 UTC m=+48.296897112" Oct 27 08:18:30.092471 systemd[1]: Started cri-containerd-0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c.scope - libcontainer container 0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c. Oct 27 08:18:30.115297 systemd-resolved[1296]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:18:30.117409 systemd-networkd[1531]: cali9f3fb5df589: Gained IPv6LL Oct 27 08:18:30.133515 containerd[1622]: time="2025-10-27T08:18:30.133018201Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:30.137168 containerd[1622]: time="2025-10-27T08:18:30.136854200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:18:30.137535 containerd[1622]: time="2025-10-27T08:18:30.137143212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:18:30.137771 kubelet[2803]: E1027 08:18:30.137723 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:18:30.138344 kubelet[2803]: E1027 08:18:30.137956 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:18:30.138344 kubelet[2803]: E1027 08:18:30.138130 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66b74c9c6f-jk2nn_calico-apiserver(77100d96-e703-4e0a-b71a-6946f424cbfa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" 
Oct 27 08:18:30.138344 kubelet[2803]: E1027 08:18:30.138163 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" podUID="77100d96-e703-4e0a-b71a-6946f424cbfa" Oct 27 08:18:30.140115 containerd[1622]: time="2025-10-27T08:18:30.140077028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:18:30.152646 containerd[1622]: time="2025-10-27T08:18:30.152539154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6bqk,Uid:2e0d791c-1eec-4e51-af5e-ee7c86a5bb94,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d234bbdc66b01076a2e737c29cea5effb79e343037664dd31f40a963fb0c47c\"" Oct 27 08:18:30.155026 systemd-networkd[1531]: cali660b1e934ef: Link UP Oct 27 08:18:30.156094 systemd-networkd[1531]: cali660b1e934ef: Gained carrier Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:29.931 [INFO][4948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0 calico-kube-controllers-8cb769c6- calico-system eb96c43d-718e-4752-922d-cb8f671d414c 845 0 2025-10-27 08:18:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8cb769c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8cb769c6-rbvs2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali660b1e934ef [] [] }} ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Namespace="calico-system" Pod="calico-kube-controllers-8cb769c6-rbvs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:29.931 [INFO][4948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Namespace="calico-system" Pod="calico-kube-controllers-8cb769c6-rbvs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:29.966 [INFO][4968] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" HandleID="k8s-pod-network.3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Workload="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:29.967 [INFO][4968] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" HandleID="k8s-pod-network.3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Workload="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8cb769c6-rbvs2", "timestamp":"2025-10-27 
08:18:29.966971473 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:29.967 [INFO][4968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.008 [INFO][4968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.009 [INFO][4968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.084 [INFO][4968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" host="localhost" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.095 [INFO][4968] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.109 [INFO][4968] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.113 [INFO][4968] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.121 [INFO][4968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.122 [INFO][4968] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" host="localhost" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.127 [INFO][4968] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489 Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.134 [INFO][4968] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" host="localhost" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.143 [INFO][4968] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" host="localhost" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.143 [INFO][4968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" host="localhost" Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.144 [INFO][4968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:18:30.172350 containerd[1622]: 2025-10-27 08:18:30.144 [INFO][4968] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" HandleID="k8s-pod-network.3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Workload="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" Oct 27 08:18:30.172890 containerd[1622]: 2025-10-27 08:18:30.149 [INFO][4948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Namespace="calico-system" Pod="calico-kube-controllers-8cb769c6-rbvs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0", GenerateName:"calico-kube-controllers-8cb769c6-", Namespace:"calico-system", SelfLink:"", UID:"eb96c43d-718e-4752-922d-cb8f671d414c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8cb769c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8cb769c6-rbvs2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali660b1e934ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:30.172890 containerd[1622]: 2025-10-27 08:18:30.149 [INFO][4948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Namespace="calico-system" Pod="calico-kube-controllers-8cb769c6-rbvs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" Oct 27 08:18:30.172890 containerd[1622]: 2025-10-27 08:18:30.149 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali660b1e934ef ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Namespace="calico-system" Pod="calico-kube-controllers-8cb769c6-rbvs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" Oct 27 08:18:30.172890 containerd[1622]: 2025-10-27 08:18:30.157 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Namespace="calico-system" Pod="calico-kube-controllers-8cb769c6-rbvs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" Oct 27 08:18:30.172890 containerd[1622]: 2025-10-27 08:18:30.157 [INFO][4948] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Namespace="calico-system" Pod="calico-kube-controllers-8cb769c6-rbvs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0", GenerateName:"calico-kube-controllers-8cb769c6-", Namespace:"calico-system", SelfLink:"", UID:"eb96c43d-718e-4752-922d-cb8f671d414c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8cb769c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489", Pod:"calico-kube-controllers-8cb769c6-rbvs2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali660b1e934ef", MAC:"86:7d:b3:82:74:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:18:30.172890 containerd[1622]: 2025-10-27 08:18:30.166 [INFO][4948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" Namespace="calico-system" Pod="calico-kube-controllers-8cb769c6-rbvs2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8cb769c6--rbvs2-eth0" Oct 27 08:18:30.192497 containerd[1622]: time="2025-10-27T08:18:30.192432005Z" level=info msg="connecting to shim 3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489" address="unix:///run/containerd/s/bba5e8053302ffdeb95362f5ddf098b1c53765ab9a55fb7285df0aaf3a8c7868" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:18:30.222452 systemd[1]: Started cri-containerd-3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489.scope - libcontainer container 3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489. 
Oct 27 08:18:30.249431 systemd-resolved[1296]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:18:30.282587 containerd[1622]: time="2025-10-27T08:18:30.282526582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8cb769c6-rbvs2,Uid:eb96c43d-718e-4752-922d-cb8f671d414c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ea152626494d651e3fcc886c5a9a1e92e644e12d7e2cd3cbb6241ce351f0489\"" Oct 27 08:18:30.373434 systemd-networkd[1531]: calicc0aedfd9b6: Gained IPv6LL Oct 27 08:18:30.373835 systemd-networkd[1531]: cali56f9f2aacff: Gained IPv6LL Oct 27 08:18:30.501407 systemd-networkd[1531]: cali8c127d7d355: Gained IPv6LL Oct 27 08:18:30.535681 containerd[1622]: time="2025-10-27T08:18:30.535614637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:30.537054 containerd[1622]: time="2025-10-27T08:18:30.536990229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:18:30.537131 containerd[1622]: time="2025-10-27T08:18:30.537030965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:18:30.537424 kubelet[2803]: E1027 08:18:30.537362 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:18:30.537502 kubelet[2803]: E1027 08:18:30.537429 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:18:30.537862 kubelet[2803]: E1027 08:18:30.537699 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b8889988c-mxjjc_calico-system(5deac30e-2d61-4e43-811d-2847f4300da3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:30.538366 containerd[1622]: time="2025-10-27T08:18:30.538091014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:18:30.888549 containerd[1622]: time="2025-10-27T08:18:30.888262923Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:30.889925 containerd[1622]: time="2025-10-27T08:18:30.889831116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:18:30.890158 containerd[1622]: time="2025-10-27T08:18:30.889835825Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:18:30.890336 kubelet[2803]: E1027 08:18:30.890276 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:18:30.890450 kubelet[2803]: E1027 08:18:30.890344 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:18:30.890573 kubelet[2803]: E1027 08:18:30.890544 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-h6bqk_calico-system(2e0d791c-1eec-4e51-af5e-ee7c86a5bb94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:30.890998 containerd[1622]: time="2025-10-27T08:18:30.890965083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:18:30.949463 systemd-networkd[1531]: vxlan.calico: Gained IPv6LL Oct 27 08:18:31.077831 kubelet[2803]: E1027 08:18:31.077673 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:31.077831 kubelet[2803]: E1027 08:18:31.077673 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:31.078763 kubelet[2803]: E1027 08:18:31.078452 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" podUID="77100d96-e703-4e0a-b71a-6946f424cbfa" Oct 27 08:18:31.250389 containerd[1622]: time="2025-10-27T08:18:31.250182498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:31.269496 systemd-networkd[1531]: cali660b1e934ef: Gained IPv6LL Oct 27 08:18:31.366784 containerd[1622]: time="2025-10-27T08:18:31.366684040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:18:31.366984 containerd[1622]: time="2025-10-27T08:18:31.366737370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:18:31.367138 
kubelet[2803]: E1027 08:18:31.367058 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:18:31.367244 kubelet[2803]: E1027 08:18:31.367169 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:18:31.367425 kubelet[2803]: E1027 08:18:31.367364 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8cb769c6-rbvs2_calico-system(eb96c43d-718e-4752-922d-cb8f671d414c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:31.367932 kubelet[2803]: E1027 08:18:31.367767 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" podUID="eb96c43d-718e-4752-922d-cb8f671d414c" Oct 27 08:18:31.368293 containerd[1622]: time="2025-10-27T08:18:31.367690288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:18:31.461509 systemd-networkd[1531]: calic5bc629f11d: Gained IPv6LL Oct 27 08:18:31.799016 containerd[1622]: time="2025-10-27T08:18:31.798935283Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:31.800275 containerd[1622]: time="2025-10-27T08:18:31.800225324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:18:31.800337 containerd[1622]: time="2025-10-27T08:18:31.800229462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:18:31.800647 kubelet[2803]: E1027 08:18:31.800567 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:18:31.800734 kubelet[2803]: E1027 08:18:31.800665 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:18:31.800945 kubelet[2803]: E1027 08:18:31.800908 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b8889988c-mxjjc_calico-system(5deac30e-2d61-4e43-811d-2847f4300da3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:31.801043 kubelet[2803]: E1027 08:18:31.800981 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b8889988c-mxjjc" podUID="5deac30e-2d61-4e43-811d-2847f4300da3" Oct 27 08:18:31.801500 containerd[1622]: time="2025-10-27T08:18:31.801317824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:18:32.080836 kubelet[2803]: E1027 08:18:32.080359 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" podUID="eb96c43d-718e-4752-922d-cb8f671d414c" Oct 27 08:18:32.082636 kubelet[2803]: E1027 08:18:32.081770 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b8889988c-mxjjc" 
podUID="5deac30e-2d61-4e43-811d-2847f4300da3" Oct 27 08:18:32.141809 containerd[1622]: time="2025-10-27T08:18:32.141745792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:32.143080 containerd[1622]: time="2025-10-27T08:18:32.143040391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:18:32.143130 containerd[1622]: time="2025-10-27T08:18:32.143106545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:18:32.143467 kubelet[2803]: E1027 08:18:32.143383 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:18:32.143519 kubelet[2803]: E1027 08:18:32.143480 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:18:32.143661 kubelet[2803]: E1027 08:18:32.143622 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-h6bqk_calico-system(2e0d791c-1eec-4e51-af5e-ee7c86a5bb94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:32.143766 kubelet[2803]: E1027 08:18:32.143690 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:32.775553 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:54570.service - OpenSSH per-connection server daemon (10.0.0.1:54570). 
Oct 27 08:18:32.843673 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 54570 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:32.845870 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:32.850901 systemd-logind[1605]: New session 12 of user core. Oct 27 08:18:32.857400 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 08:18:32.983444 sshd[5107]: Connection closed by 10.0.0.1 port 54570 Oct 27 08:18:32.983786 sshd-session[5104]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:32.987601 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:54570.service: Deactivated successfully. Oct 27 08:18:32.990993 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 08:18:32.993384 systemd-logind[1605]: Session 12 logged out. Waiting for processes to exit. Oct 27 08:18:32.995973 systemd-logind[1605]: Removed session 12. Oct 27 08:18:33.084533 kubelet[2803]: E1027 08:18:33.084318 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:37.999040 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:54584.service - OpenSSH per-connection server daemon (10.0.0.1:54584). Oct 27 08:18:38.071683 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 54584 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:38.074133 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:38.079510 systemd-logind[1605]: New session 13 of user core. Oct 27 08:18:38.090458 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 08:18:38.216892 sshd[5132]: Connection closed by 10.0.0.1 port 54584 Oct 27 08:18:38.217324 sshd-session[5129]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:38.232566 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:54584.service: Deactivated successfully. Oct 27 08:18:38.235602 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 08:18:38.236564 systemd-logind[1605]: Session 13 logged out. Waiting for processes to exit. Oct 27 08:18:38.241045 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:54596.service - OpenSSH per-connection server daemon (10.0.0.1:54596). Oct 27 08:18:38.241778 systemd-logind[1605]: Removed session 13. 
Oct 27 08:18:38.314640 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 54596 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:38.316629 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:38.322566 systemd-logind[1605]: New session 14 of user core. Oct 27 08:18:38.333580 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 27 08:18:38.505836 sshd[5149]: Connection closed by 10.0.0.1 port 54596 Oct 27 08:18:38.506229 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:38.518979 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:54596.service: Deactivated successfully. Oct 27 08:18:38.522137 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 08:18:38.525291 systemd-logind[1605]: Session 14 logged out. Waiting for processes to exit. Oct 27 08:18:38.529532 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:54598.service - OpenSSH per-connection server daemon (10.0.0.1:54598). Oct 27 08:18:38.531545 systemd-logind[1605]: Removed session 14. Oct 27 08:18:38.586315 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 54598 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:38.588289 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:38.594391 systemd-logind[1605]: New session 15 of user core. Oct 27 08:18:38.605581 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 08:18:38.766051 sshd[5163]: Connection closed by 10.0.0.1 port 54598 Oct 27 08:18:38.766546 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:38.771696 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:54598.service: Deactivated successfully. Oct 27 08:18:38.773989 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 08:18:38.774980 systemd-logind[1605]: Session 15 logged out. Waiting for processes to exit. Oct 27 08:18:38.776510 systemd-logind[1605]: Removed session 15. 
Oct 27 08:18:41.883961 containerd[1622]: time="2025-10-27T08:18:41.883900111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:18:42.255092 containerd[1622]: time="2025-10-27T08:18:42.254927847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:42.256274 containerd[1622]: time="2025-10-27T08:18:42.256203337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:18:42.256320 containerd[1622]: time="2025-10-27T08:18:42.256265257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:18:42.256561 kubelet[2803]: E1027 08:18:42.256491 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:18:42.256953 kubelet[2803]: E1027 08:18:42.256569 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:18:42.256953 kubelet[2803]: E1027 08:18:42.256765 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66b74c9c6f-jk2nn_calico-apiserver(77100d96-e703-4e0a-b71a-6946f424cbfa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:42.256953 kubelet[2803]: E1027 08:18:42.256815 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" podUID="77100d96-e703-4e0a-b71a-6946f424cbfa" Oct 27 08:18:42.257085 containerd[1622]: time="2025-10-27T08:18:42.257062094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:18:42.589045 containerd[1622]: time="2025-10-27T08:18:42.588989589Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:42.590402 containerd[1622]: time="2025-10-27T08:18:42.590356787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:18:42.590551 containerd[1622]: 
time="2025-10-27T08:18:42.590470326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:18:42.590681 kubelet[2803]: E1027 08:18:42.590625 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:18:42.590736 kubelet[2803]: E1027 08:18:42.590687 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:18:42.590839 kubelet[2803]: E1027 08:18:42.590791 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66b74c9c6f-rnnck_calico-apiserver(bce4a7cd-a2cf-439e-a5d4-f335be73a306): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:42.590839 kubelet[2803]: E1027 08:18:42.590857 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" podUID="bce4a7cd-a2cf-439e-a5d4-f335be73a306" Oct 27 08:18:42.882589 containerd[1622]: time="2025-10-27T08:18:42.882426524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:18:43.377768 containerd[1622]: time="2025-10-27T08:18:43.377639891Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:43.414052 containerd[1622]: time="2025-10-27T08:18:43.413975880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:18:43.414238 containerd[1622]: time="2025-10-27T08:18:43.414005267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:18:43.414330 kubelet[2803]: E1027 08:18:43.414287 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:18:43.414709 kubelet[2803]: E1027 08:18:43.414335 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:18:43.414709 kubelet[2803]: E1027 08:18:43.414431 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lc29c_calico-system(cd97c86b-d16c-499b-928c-ee1afbc3c575): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:43.414709 kubelet[2803]: E1027 08:18:43.414470 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lc29c" podUID="cd97c86b-d16c-499b-928c-ee1afbc3c575" Oct 27 08:18:43.788969 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:60550.service - OpenSSH per-connection server daemon (10.0.0.1:60550). Oct 27 08:18:43.852761 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 60550 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:43.854307 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:43.858862 systemd-logind[1605]: New session 16 of user core. Oct 27 08:18:43.869342 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 08:18:43.988421 sshd[5189]: Connection closed by 10.0.0.1 port 60550 Oct 27 08:18:43.988751 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:43.993812 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:60550.service: Deactivated successfully. Oct 27 08:18:43.996058 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 08:18:43.996793 systemd-logind[1605]: Session 16 logged out. Waiting for processes to exit. Oct 27 08:18:43.998115 systemd-logind[1605]: Removed session 16. 
Oct 27 08:18:44.882646 containerd[1622]: time="2025-10-27T08:18:44.882336412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:18:45.413969 containerd[1622]: time="2025-10-27T08:18:45.413894974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:45.505805 containerd[1622]: time="2025-10-27T08:18:45.505713077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:18:45.505954 containerd[1622]: time="2025-10-27T08:18:45.505768082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:18:45.506141 kubelet[2803]: E1027 08:18:45.506089 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:18:45.506536 kubelet[2803]: E1027 08:18:45.506148 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:18:45.506536 kubelet[2803]: E1027 08:18:45.506346 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-h6bqk_calico-system(2e0d791c-1eec-4e51-af5e-ee7c86a5bb94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:45.506619 containerd[1622]: time="2025-10-27T08:18:45.506581328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:18:45.905333 containerd[1622]: time="2025-10-27T08:18:45.905281810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:45.986659 containerd[1622]: time="2025-10-27T08:18:45.986593819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:18:45.986659 containerd[1622]: time="2025-10-27T08:18:45.986638876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:18:45.986967 kubelet[2803]: E1027 08:18:45.986907 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:18:45.986967 kubelet[2803]: E1027 08:18:45.986966 2803 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:18:45.987227 kubelet[2803]: E1027 08:18:45.987175 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8cb769c6-rbvs2_calico-system(eb96c43d-718e-4752-922d-cb8f671d414c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:45.987309 kubelet[2803]: E1027 08:18:45.987262 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" podUID="eb96c43d-718e-4752-922d-cb8f671d414c" Oct 27 08:18:45.987446 containerd[1622]: time="2025-10-27T08:18:45.987407405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:18:46.429967 containerd[1622]: time="2025-10-27T08:18:46.429899798Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:46.449097 containerd[1622]: time="2025-10-27T08:18:46.449022745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:18:46.449250 containerd[1622]: time="2025-10-27T08:18:46.449075907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:18:46.449354 kubelet[2803]: E1027 08:18:46.449302 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:18:46.449443 kubelet[2803]: E1027 08:18:46.449366 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:18:46.449531 kubelet[2803]: E1027 08:18:46.449500 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in 
pod csi-node-driver-h6bqk_calico-system(2e0d791c-1eec-4e51-af5e-ee7c86a5bb94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:46.449627 kubelet[2803]: E1027 08:18:46.449558 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:18:46.882882 containerd[1622]: time="2025-10-27T08:18:46.882716563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:18:47.227071 containerd[1622]: time="2025-10-27T08:18:47.226906890Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:47.228217 containerd[1622]: time="2025-10-27T08:18:47.228170409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:18:47.228289 containerd[1622]: time="2025-10-27T08:18:47.228203963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:18:47.228594 kubelet[2803]: E1027 08:18:47.228538 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:18:47.228932 kubelet[2803]: E1027 08:18:47.228609 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:18:47.228932 kubelet[2803]: E1027 08:18:47.228707 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b8889988c-mxjjc_calico-system(5deac30e-2d61-4e43-811d-2847f4300da3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:47.229893 containerd[1622]: time="2025-10-27T08:18:47.229856921Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:18:47.561747 containerd[1622]: time="2025-10-27T08:18:47.561668631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:18:47.562990 containerd[1622]: time="2025-10-27T08:18:47.562944173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:18:47.563074 containerd[1622]: time="2025-10-27T08:18:47.563045578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:18:47.563363 kubelet[2803]: E1027 08:18:47.563296 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:18:47.563456 kubelet[2803]: E1027 08:18:47.563380 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:18:47.563571 kubelet[2803]: E1027 08:18:47.563518 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b8889988c-mxjjc_calico-system(5deac30e-2d61-4e43-811d-2847f4300da3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:18:47.563571 kubelet[2803]: E1027 08:18:47.563585 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b8889988c-mxjjc" podUID="5deac30e-2d61-4e43-811d-2847f4300da3" Oct 27 08:18:49.005915 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:60552.service - OpenSSH per-connection server daemon (10.0.0.1:60552). 
Oct 27 08:18:49.067923 sshd[5206]: Accepted publickey for core from 10.0.0.1 port 60552 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:49.070337 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:49.075786 systemd-logind[1605]: New session 17 of user core. Oct 27 08:18:49.085514 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 08:18:49.218196 sshd[5209]: Connection closed by 10.0.0.1 port 60552 Oct 27 08:18:49.218765 sshd-session[5206]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:49.224642 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:60552.service: Deactivated successfully. Oct 27 08:18:49.226862 systemd[1]: session-17.scope: Deactivated successfully. Oct 27 08:18:49.228051 systemd-logind[1605]: Session 17 logged out. Waiting for processes to exit. Oct 27 08:18:49.229288 systemd-logind[1605]: Removed session 17. Oct 27 08:18:54.246009 systemd[1]: Started sshd@17-10.0.0.23:22-10.0.0.1:54310.service - OpenSSH per-connection server daemon (10.0.0.1:54310). Oct 27 08:18:54.322330 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 54310 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:54.324597 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:54.330052 systemd-logind[1605]: New session 18 of user core. Oct 27 08:18:54.336373 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 08:18:54.546692 sshd[5235]: Connection closed by 10.0.0.1 port 54310 Oct 27 08:18:54.547077 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:54.551098 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:54310.service: Deactivated successfully. Oct 27 08:18:54.553743 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 08:18:54.557244 systemd-logind[1605]: Session 18 logged out. Waiting for processes to exit. Oct 27 08:18:54.558736 systemd-logind[1605]: Removed session 18. 
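
The sshd/systemd-logind entries above follow a fixed pattern: Accepted publickey, pam_unix session opened, "New session N of user core", then connection close and "Removed session N", usually within a second or two. A small parsing sketch, an editor's aid rather than part of the log, pairs the open and close entries to measure session lifetimes; the year is an assumption because the journal timestamps here omit it.

# Minimal sketch: pair "New session N of user core" with "Removed session N"
# to compute how long each SSH session stayed open, using the timestamp
# format seen in the lines above.
import re
from datetime import datetime

ENTRY = re.compile(
    r"(?P<ts>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"systemd-logind\[\d+\]: "
    r"(?:New session (?P<new>\d+) of user core|Removed session (?P<gone>\d+))"
)

def session_durations(text: str, year: int = 2025):  # year assumed
    opened = {}
    for m in ENTRY.finditer(text):
        ts = datetime.strptime(f"{year} {m.group('ts')}", "%Y %b %d %H:%M:%S.%f")
        if m.group("new"):
            opened[m.group("new")] = ts
        elif m.group("gone") in opened:
            yield m.group("gone"), (ts - opened.pop(m.group("gone"))).total_seconds()

if __name__ == "__main__":
    with open("journal.txt") as f:  # hypothetical dump of this log
        for sid, seconds in session_durations(f.read()):
            print(f"session {sid}: open for {seconds:.1f}s")
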
Oct 27 08:18:54.882284 kubelet[2803]: E1027 08:18:54.882072 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lc29c" podUID="cd97c86b-d16c-499b-928c-ee1afbc3c575" Oct 27 08:18:54.882284 kubelet[2803]: E1027 08:18:54.882151 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" podUID="77100d96-e703-4e0a-b71a-6946f424cbfa" Oct 27 08:18:56.881933 kubelet[2803]: E1027 08:18:56.881870 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" podUID="bce4a7cd-a2cf-439e-a5d4-f335be73a306" Oct 27 08:18:58.139158 containerd[1622]: time="2025-10-27T08:18:58.139069137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68eba6e469ebe6f6e3b3bcec94e5c2b29333e860be99d6609cf0efd3b58566fa\" id:\"a466661c912b9bef9f521b9a00ee943645278d185b0e58e064fffa8c622fc1ac\" pid:5258 exited_at:{seconds:1761553138 nanos:138459151}" Oct 27 08:18:58.140848 kubelet[2803]: E1027 08:18:58.140827 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:18:58.883536 kubelet[2803]: E1027 08:18:58.883310 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 
08:18:59.563469 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:54312.service - OpenSSH per-connection server daemon (10.0.0.1:54312). Oct 27 08:18:59.644096 sshd[5273]: Accepted publickey for core from 10.0.0.1 port 54312 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:59.645893 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:59.650749 systemd-logind[1605]: New session 19 of user core. Oct 27 08:18:59.663695 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 27 08:18:59.795044 sshd[5277]: Connection closed by 10.0.0.1 port 54312 Oct 27 08:18:59.795522 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Oct 27 08:18:59.805761 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:54312.service: Deactivated successfully. Oct 27 08:18:59.808348 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 08:18:59.809642 systemd-logind[1605]: Session 19 logged out. Waiting for processes to exit. Oct 27 08:18:59.814090 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:54316.service - OpenSSH per-connection server daemon (10.0.0.1:54316). Oct 27 08:18:59.815364 systemd-logind[1605]: Removed session 19. Oct 27 08:18:59.871325 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 54316 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:18:59.873246 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:18:59.878289 systemd-logind[1605]: New session 20 of user core. Oct 27 08:18:59.883550 kubelet[2803]: E1027 08:18:59.883492 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b8889988c-mxjjc" podUID="5deac30e-2d61-4e43-811d-2847f4300da3" Oct 27 08:18:59.884450 systemd[1]: Started session-20.scope - Session 20 of User core. 
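
Once a pull has failed with ErrImagePull, the kubelet switches to the ImagePullBackOff messages seen from 08:18:54 onward and retries on an exponential schedule rather than hammering the registry. The sketch below illustrates only that back-off pattern; the 10-second base and 300-second cap reflect my understanding of the kubelet defaults and are assumptions, not values taken from this log.

# Minimal sketch of the exponential back-off behind the ImagePullBackOff
# messages above. Base and cap are assumed defaults, not read from the log.
def backoff_schedule(base: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay = base
    for attempt in range(1, attempts + 1):
        yield attempt, min(delay, cap)
        delay = min(delay * 2, cap)

if __name__ == "__main__":
    for attempt, delay in backoff_schedule():
        print(f"attempt {attempt}: wait {delay:.0f}s before retrying the pull")
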
Oct 27 08:18:59.884734 kubelet[2803]: E1027 08:18:59.884573 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" podUID="eb96c43d-718e-4752-922d-cb8f671d414c" Oct 27 08:19:00.684434 sshd[5293]: Connection closed by 10.0.0.1 port 54316 Oct 27 08:19:00.684817 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Oct 27 08:19:00.697017 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:54316.service: Deactivated successfully. Oct 27 08:19:00.699713 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 08:19:00.700651 systemd-logind[1605]: Session 20 logged out. Waiting for processes to exit. Oct 27 08:19:00.704857 systemd[1]: Started sshd@20-10.0.0.23:22-10.0.0.1:39208.service - OpenSSH per-connection server daemon (10.0.0.1:39208). Oct 27 08:19:00.705927 systemd-logind[1605]: Removed session 20. Oct 27 08:19:00.792149 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 39208 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:19:00.794035 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:19:00.798706 systemd-logind[1605]: New session 21 of user core. Oct 27 08:19:00.810373 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 27 08:19:01.274444 sshd[5309]: Connection closed by 10.0.0.1 port 39208 Oct 27 08:19:01.276729 sshd-session[5306]: pam_unix(sshd:session): session closed for user core Oct 27 08:19:01.285457 systemd[1]: sshd@20-10.0.0.23:22-10.0.0.1:39208.service: Deactivated successfully. Oct 27 08:19:01.287479 systemd[1]: session-21.scope: Deactivated successfully. Oct 27 08:19:01.288320 systemd-logind[1605]: Session 21 logged out. Waiting for processes to exit. Oct 27 08:19:01.293294 systemd[1]: Started sshd@21-10.0.0.23:22-10.0.0.1:39214.service - OpenSSH per-connection server daemon (10.0.0.1:39214). Oct 27 08:19:01.294661 systemd-logind[1605]: Removed session 21. Oct 27 08:19:01.351366 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 39214 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:19:01.353303 sshd-session[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:19:01.357944 systemd-logind[1605]: New session 22 of user core. Oct 27 08:19:01.368327 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 27 08:19:01.579182 sshd[5332]: Connection closed by 10.0.0.1 port 39214 Oct 27 08:19:01.579620 sshd-session[5329]: pam_unix(sshd:session): session closed for user core Oct 27 08:19:01.593052 systemd[1]: sshd@21-10.0.0.23:22-10.0.0.1:39214.service: Deactivated successfully. Oct 27 08:19:01.595934 systemd[1]: session-22.scope: Deactivated successfully. Oct 27 08:19:01.597507 systemd-logind[1605]: Session 22 logged out. Waiting for processes to exit. Oct 27 08:19:01.601405 systemd[1]: Started sshd@22-10.0.0.23:22-10.0.0.1:39226.service - OpenSSH per-connection server daemon (10.0.0.1:39226). Oct 27 08:19:01.602101 systemd-logind[1605]: Removed session 22. 
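
Interleaved with the pull failures, the kubelet repeatedly logs "Nameserver limits exceeded" and applies only "1.1.1.1 1.0.0.1 8.8.8.8" (the warning recurs below as well). The sketch mimics that truncation; the limit of three nameservers is an assumption about the kubelet's resolv.conf handling, not something stated in these lines.

# Minimal sketch of the truncation behind the "Nameserver limits exceeded"
# warnings: keep the first few nameservers, report the rest as omitted.
MAX_NAMESERVERS = 3  # assumed kubelet limit

def apply_nameserver_limit(nameservers):
    return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

if __name__ == "__main__":
    # Hypothetical host resolv.conf with one nameserver too many.
    applied, omitted = apply_nameserver_limit(
        ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]
    )
    print("applied nameserver line:", " ".join(applied))
    print("omitted:", omitted)
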
Oct 27 08:19:01.663230 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 39226 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:19:01.664619 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:19:01.669064 systemd-logind[1605]: New session 23 of user core. Oct 27 08:19:01.680347 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 27 08:19:01.919474 sshd[5346]: Connection closed by 10.0.0.1 port 39226 Oct 27 08:19:01.956721 sshd-session[5343]: pam_unix(sshd:session): session closed for user core Oct 27 08:19:01.962885 systemd[1]: sshd@22-10.0.0.23:22-10.0.0.1:39226.service: Deactivated successfully. Oct 27 08:19:01.965298 systemd[1]: session-23.scope: Deactivated successfully. Oct 27 08:19:01.966089 systemd-logind[1605]: Session 23 logged out. Waiting for processes to exit. Oct 27 08:19:01.967651 systemd-logind[1605]: Removed session 23. Oct 27 08:19:02.881467 kubelet[2803]: E1027 08:19:02.881423 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:05.883184 containerd[1622]: time="2025-10-27T08:19:05.883115114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:19:06.319437 containerd[1622]: time="2025-10-27T08:19:06.319371282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:19:06.410716 containerd[1622]: time="2025-10-27T08:19:06.410624937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:19:06.410891 containerd[1622]: time="2025-10-27T08:19:06.410695872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:19:06.410998 kubelet[2803]: E1027 08:19:06.410955 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:19:06.411452 kubelet[2803]: E1027 08:19:06.411001 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:19:06.411452 kubelet[2803]: E1027 08:19:06.411090 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lc29c_calico-system(cd97c86b-d16c-499b-928c-ee1afbc3c575): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:19:06.411452 kubelet[2803]: E1027 08:19:06.411121 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lc29c" podUID="cd97c86b-d16c-499b-928c-ee1afbc3c575" Oct 27 08:19:06.931952 systemd[1]: Started sshd@23-10.0.0.23:22-10.0.0.1:39228.service - OpenSSH per-connection server daemon (10.0.0.1:39228). Oct 27 08:19:07.008821 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 39228 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:19:07.010625 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:19:07.015481 systemd-logind[1605]: New session 24 of user core. Oct 27 08:19:07.023356 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 27 08:19:07.153589 sshd[5366]: Connection closed by 10.0.0.1 port 39228 Oct 27 08:19:07.154002 sshd-session[5363]: pam_unix(sshd:session): session closed for user core Oct 27 08:19:07.159665 systemd[1]: sshd@23-10.0.0.23:22-10.0.0.1:39228.service: Deactivated successfully. Oct 27 08:19:07.162514 systemd[1]: session-24.scope: Deactivated successfully. Oct 27 08:19:07.164416 systemd-logind[1605]: Session 24 logged out. Waiting for processes to exit. Oct 27 08:19:07.166145 systemd-logind[1605]: Removed session 24. Oct 27 08:19:07.883224 containerd[1622]: time="2025-10-27T08:19:07.883123485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:19:08.209687 containerd[1622]: time="2025-10-27T08:19:08.209519974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:19:08.210768 containerd[1622]: time="2025-10-27T08:19:08.210726109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:19:08.210866 containerd[1622]: time="2025-10-27T08:19:08.210809238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:19:08.210973 kubelet[2803]: E1027 08:19:08.210927 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:19:08.211366 kubelet[2803]: E1027 08:19:08.210982 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:19:08.211366 kubelet[2803]: E1027 08:19:08.211087 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66b74c9c6f-rnnck_calico-apiserver(bce4a7cd-a2cf-439e-a5d4-f335be73a306): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:19:08.211366 kubelet[2803]: E1027 08:19:08.211123 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-rnnck" podUID="bce4a7cd-a2cf-439e-a5d4-f335be73a306" Oct 27 08:19:08.882877 containerd[1622]: time="2025-10-27T08:19:08.882809309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:19:09.356852 containerd[1622]: time="2025-10-27T08:19:09.356777625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:19:09.358151 containerd[1622]: time="2025-10-27T08:19:09.358096845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:19:09.358272 containerd[1622]: time="2025-10-27T08:19:09.358175574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:19:09.358479 kubelet[2803]: E1027 08:19:09.358408 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:19:09.358479 kubelet[2803]: E1027 08:19:09.358475 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:19:09.358946 kubelet[2803]: E1027 08:19:09.358631 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66b74c9c6f-jk2nn_calico-apiserver(77100d96-e703-4e0a-b71a-6946f424cbfa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:19:09.358946 kubelet[2803]: E1027 08:19:09.358674 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b74c9c6f-jk2nn" podUID="77100d96-e703-4e0a-b71a-6946f424cbfa" Oct 27 
08:19:10.882442 kubelet[2803]: E1027 08:19:10.882100 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:10.883042 kubelet[2803]: E1027 08:19:10.882816 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:10.883535 containerd[1622]: time="2025-10-27T08:19:10.883438950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:19:11.309749 containerd[1622]: time="2025-10-27T08:19:11.309685156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:19:11.404845 containerd[1622]: time="2025-10-27T08:19:11.404775963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:19:11.404845 containerd[1622]: time="2025-10-27T08:19:11.404851356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:19:11.405173 kubelet[2803]: E1027 08:19:11.405119 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:19:11.405254 kubelet[2803]: E1027 08:19:11.405178 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:19:11.405332 kubelet[2803]: E1027 08:19:11.405298 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8cb769c6-rbvs2_calico-system(eb96c43d-718e-4752-922d-cb8f671d414c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:19:11.405403 kubelet[2803]: E1027 08:19:11.405337 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb769c6-rbvs2" podUID="eb96c43d-718e-4752-922d-cb8f671d414c" Oct 27 08:19:12.168931 systemd[1]: Started sshd@24-10.0.0.23:22-10.0.0.1:43766.service - OpenSSH per-connection server daemon 
(10.0.0.1:43766). Oct 27 08:19:12.230812 sshd[5387]: Accepted publickey for core from 10.0.0.1 port 43766 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:19:12.232191 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:19:12.236669 systemd-logind[1605]: New session 25 of user core. Oct 27 08:19:12.246343 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 27 08:19:12.358873 sshd[5390]: Connection closed by 10.0.0.1 port 43766 Oct 27 08:19:12.359260 sshd-session[5387]: pam_unix(sshd:session): session closed for user core Oct 27 08:19:12.364412 systemd[1]: sshd@24-10.0.0.23:22-10.0.0.1:43766.service: Deactivated successfully. Oct 27 08:19:12.366623 systemd[1]: session-25.scope: Deactivated successfully. Oct 27 08:19:12.367568 systemd-logind[1605]: Session 25 logged out. Waiting for processes to exit. Oct 27 08:19:12.368853 systemd-logind[1605]: Removed session 25. Oct 27 08:19:12.882413 containerd[1622]: time="2025-10-27T08:19:12.882342681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:19:13.266391 containerd[1622]: time="2025-10-27T08:19:13.266332189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:19:13.267455 containerd[1622]: time="2025-10-27T08:19:13.267419485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:19:13.267526 containerd[1622]: time="2025-10-27T08:19:13.267500419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:19:13.267766 kubelet[2803]: E1027 08:19:13.267716 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:19:13.268180 kubelet[2803]: E1027 08:19:13.267779 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:19:13.268180 kubelet[2803]: E1027 08:19:13.267886 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-h6bqk_calico-system(2e0d791c-1eec-4e51-af5e-ee7c86a5bb94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:19:13.269005 containerd[1622]: time="2025-10-27T08:19:13.268969800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:19:13.621376 containerd[1622]: time="2025-10-27T08:19:13.621178399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:19:13.622408 containerd[1622]: time="2025-10-27T08:19:13.622367218Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:19:13.622408 containerd[1622]: time="2025-10-27T08:19:13.622401674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:19:13.622711 kubelet[2803]: E1027 08:19:13.622631 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:19:13.622711 kubelet[2803]: E1027 08:19:13.622707 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:19:13.622905 kubelet[2803]: E1027 08:19:13.622803 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-h6bqk_calico-system(2e0d791c-1eec-4e51-af5e-ee7c86a5bb94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:19:13.622905 kubelet[2803]: E1027 08:19:13.622849 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h6bqk" podUID="2e0d791c-1eec-4e51-af5e-ee7c86a5bb94" Oct 27 08:19:13.883553 kubelet[2803]: E1027 08:19:13.883427 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:14.883048 containerd[1622]: time="2025-10-27T08:19:14.882967666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:19:15.260410 containerd[1622]: time="2025-10-27T08:19:15.260350290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:19:15.261547 containerd[1622]: time="2025-10-27T08:19:15.261497228Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:19:15.261619 containerd[1622]: time="2025-10-27T08:19:15.261541021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:19:15.261861 kubelet[2803]: E1027 08:19:15.261798 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:19:15.262304 kubelet[2803]: E1027 08:19:15.261860 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:19:15.262304 kubelet[2803]: E1027 08:19:15.261961 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b8889988c-mxjjc_calico-system(5deac30e-2d61-4e43-811d-2847f4300da3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:19:15.262835 containerd[1622]: time="2025-10-27T08:19:15.262788009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:19:16.665994 containerd[1622]: time="2025-10-27T08:19:16.665925852Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:19:16.692560 containerd[1622]: time="2025-10-27T08:19:16.692516680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:19:16.692697 containerd[1622]: time="2025-10-27T08:19:16.692523513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:19:16.692877 kubelet[2803]: E1027 08:19:16.692828 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:19:16.693320 kubelet[2803]: E1027 08:19:16.692887 2803 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:19:16.693320 kubelet[2803]: E1027 08:19:16.692972 2803 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b8889988c-mxjjc_calico-system(5deac30e-2d61-4e43-811d-2847f4300da3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:19:16.693320 kubelet[2803]: E1027 08:19:16.693015 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b8889988c-mxjjc" podUID="5deac30e-2d61-4e43-811d-2847f4300da3" Oct 27 08:19:17.378553 systemd[1]: Started sshd@25-10.0.0.23:22-10.0.0.1:43778.service - OpenSSH per-connection server daemon (10.0.0.1:43778). Oct 27 08:19:17.439551 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 43778 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:19:17.441739 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:19:17.447324 systemd-logind[1605]: New session 26 of user core. Oct 27 08:19:17.462437 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 27 08:19:17.584364 sshd[5406]: Connection closed by 10.0.0.1 port 43778 Oct 27 08:19:17.584802 sshd-session[5403]: pam_unix(sshd:session): session closed for user core Oct 27 08:19:17.590699 systemd[1]: sshd@25-10.0.0.23:22-10.0.0.1:43778.service: Deactivated successfully. Oct 27 08:19:17.592886 systemd[1]: session-26.scope: Deactivated successfully. Oct 27 08:19:17.593693 systemd-logind[1605]: Session 26 logged out. Waiting for processes to exit. Oct 27 08:19:17.594939 systemd-logind[1605]: Removed session 26.