Sep 11 00:15:49.905965 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:15:45 -00 2025
Sep 11 00:15:49.906001 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=20820f07706ad5590d38fe5324b9055d59a89dc1109fdc449cad1a53209b9dbd
Sep 11 00:15:49.906013 kernel: BIOS-provided physical RAM map:
Sep 11 00:15:49.906022 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 11 00:15:49.906030 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 11 00:15:49.906039 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 11 00:15:49.906049 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 11 00:15:49.906058 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 11 00:15:49.906074 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 11 00:15:49.906083 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 11 00:15:49.906092 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:15:49.906101 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 11 00:15:49.906110 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:15:49.906119 kernel: NX (Execute Disable) protection: active
Sep 11 00:15:49.906133 kernel: APIC: Static calls initialized
Sep 11 00:15:49.906143 kernel: SMBIOS 2.8 present.
Sep 11 00:15:49.906157 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 11 00:15:49.906167 kernel: DMI: Memory slots populated: 1/1 Sep 11 00:15:49.906177 kernel: Hypervisor detected: KVM Sep 11 00:15:49.906187 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 11 00:15:49.906197 kernel: kvm-clock: using sched offset of 4370980828 cycles Sep 11 00:15:49.906208 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 11 00:15:49.906218 kernel: tsc: Detected 2794.750 MHz processor Sep 11 00:15:49.906232 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 11 00:15:49.906242 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 11 00:15:49.906252 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 11 00:15:49.906262 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 11 00:15:49.906272 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 11 00:15:49.906282 kernel: Using GB pages for direct mapping Sep 11 00:15:49.906292 kernel: ACPI: Early table checksum verification disabled Sep 11 00:15:49.906301 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 11 00:15:49.906318 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:15:49.906334 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:15:49.906344 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:15:49.906354 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 11 00:15:49.906364 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:15:49.906373 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:15:49.906383 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:15:49.906397 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:15:49.906407 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 11 00:15:49.906424 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 11 00:15:49.906435 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 11 00:15:49.906445 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 11 00:15:49.906455 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 11 00:15:49.906465 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 11 00:15:49.906475 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 11 00:15:49.906488 kernel: No NUMA configuration found Sep 11 00:15:49.906534 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 11 00:15:49.906545 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Sep 11 00:15:49.906553 kernel: Zone ranges: Sep 11 00:15:49.906560 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 11 00:15:49.906568 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 11 00:15:49.906576 kernel: Normal empty Sep 11 00:15:49.906583 kernel: Device empty Sep 11 00:15:49.906591 kernel: Movable zone start for each node Sep 11 00:15:49.906599 kernel: Early memory node ranges Sep 11 00:15:49.906612 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 11 00:15:49.906620 kernel: node 0: [mem 
0x0000000000100000-0x000000009cfdbfff] Sep 11 00:15:49.906630 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Sep 11 00:15:49.906638 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 11 00:15:49.906646 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 11 00:15:49.906654 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 11 00:15:49.906661 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 11 00:15:49.906672 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 11 00:15:49.906680 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 11 00:15:49.906690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 11 00:15:49.906698 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 11 00:15:49.906709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 11 00:15:49.906716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 11 00:15:49.906724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 11 00:15:49.906732 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 11 00:15:49.906739 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 11 00:15:49.906747 kernel: TSC deadline timer available Sep 11 00:15:49.906755 kernel: CPU topo: Max. logical packages: 1 Sep 11 00:15:49.906765 kernel: CPU topo: Max. logical dies: 1 Sep 11 00:15:49.906772 kernel: CPU topo: Max. dies per package: 1 Sep 11 00:15:49.906782 kernel: CPU topo: Max. threads per core: 1 Sep 11 00:15:49.906797 kernel: CPU topo: Num. cores per package: 4 Sep 11 00:15:49.906810 kernel: CPU topo: Num. threads per package: 4 Sep 11 00:15:49.906820 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 11 00:15:49.906830 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 11 00:15:49.906846 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 11 00:15:49.906946 kernel: kvm-guest: setup PV sched yield Sep 11 00:15:49.906959 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 11 00:15:49.906975 kernel: Booting paravirtualized kernel on KVM Sep 11 00:15:49.906986 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 11 00:15:49.906996 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 11 00:15:49.907006 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 11 00:15:49.907016 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 11 00:15:49.907026 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 11 00:15:49.907036 kernel: kvm-guest: PV spinlocks enabled Sep 11 00:15:49.907044 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 11 00:15:49.907053 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=20820f07706ad5590d38fe5324b9055d59a89dc1109fdc449cad1a53209b9dbd Sep 11 00:15:49.907064 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 11 00:15:49.907072 kernel: random: crng init done Sep 11 00:15:49.907079 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 11 00:15:49.907087 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 11 00:15:49.907097 kernel: Fallback order for Node 0: 0 Sep 11 00:15:49.907108 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Sep 11 00:15:49.907118 kernel: Policy zone: DMA32 Sep 11 00:15:49.907128 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 11 00:15:49.907140 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 11 00:15:49.907150 kernel: ftrace: allocating 40106 entries in 157 pages Sep 11 00:15:49.907160 kernel: ftrace: allocated 157 pages with 5 groups Sep 11 00:15:49.907168 kernel: Dynamic Preempt: voluntary Sep 11 00:15:49.907176 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 11 00:15:49.907187 kernel: rcu: RCU event tracing is enabled. Sep 11 00:15:49.907198 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 11 00:15:49.907208 kernel: Trampoline variant of Tasks RCU enabled. Sep 11 00:15:49.907233 kernel: Rude variant of Tasks RCU enabled. Sep 11 00:15:49.907251 kernel: Tracing variant of Tasks RCU enabled. Sep 11 00:15:49.907264 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 11 00:15:49.907274 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 11 00:15:49.907284 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 11 00:15:49.907294 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 11 00:15:49.907305 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 11 00:15:49.907614 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 11 00:15:49.907626 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 11 00:15:49.907654 kernel: Console: colour VGA+ 80x25 Sep 11 00:15:49.907665 kernel: printk: legacy console [ttyS0] enabled Sep 11 00:15:49.907675 kernel: ACPI: Core revision 20240827 Sep 11 00:15:49.907686 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 11 00:15:49.907700 kernel: APIC: Switch to symmetric I/O mode setup Sep 11 00:15:49.907711 kernel: x2apic enabled Sep 11 00:15:49.907721 kernel: APIC: Switched APIC routing to: physical x2apic Sep 11 00:15:49.907736 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 11 00:15:49.907747 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 11 00:15:49.907762 kernel: kvm-guest: setup PV IPIs Sep 11 00:15:49.907772 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 11 00:15:49.907798 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Sep 11 00:15:49.907809 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Sep 11 00:15:49.907827 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 11 00:15:49.907838 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 11 00:15:49.907851 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 11 00:15:49.907862 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 11 00:15:49.907878 kernel: Spectre V2 : Mitigation: Retpolines Sep 11 00:15:49.907889 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 11 00:15:49.907900 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 11 00:15:49.907911 kernel: active return thunk: retbleed_return_thunk Sep 11 00:15:49.908081 kernel: RETBleed: Mitigation: untrained return thunk Sep 11 00:15:49.908098 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 11 00:15:49.908109 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 11 00:15:49.908119 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 11 00:15:49.908130 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 11 00:15:49.908148 kernel: active return thunk: srso_return_thunk Sep 11 00:15:49.908159 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 11 00:15:49.908169 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 11 00:15:49.908180 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 11 00:15:49.908193 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 11 00:15:49.908204 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 11 00:15:49.908214 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 11 00:15:49.908225 kernel: Freeing SMP alternatives memory: 32K Sep 11 00:15:49.908240 kernel: pid_max: default: 32768 minimum: 301 Sep 11 00:15:49.908251 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 11 00:15:49.908263 kernel: landlock: Up and running. Sep 11 00:15:49.908273 kernel: SELinux: Initializing. Sep 11 00:15:49.908289 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 11 00:15:49.908300 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 11 00:15:49.908311 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 11 00:15:49.908322 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 11 00:15:49.908332 kernel: ... version: 0 Sep 11 00:15:49.908346 kernel: ... bit width: 48 Sep 11 00:15:49.908357 kernel: ... generic registers: 6 Sep 11 00:15:49.908367 kernel: ... value mask: 0000ffffffffffff Sep 11 00:15:49.908378 kernel: ... max period: 00007fffffffffff Sep 11 00:15:49.908389 kernel: ... fixed-purpose events: 0 Sep 11 00:15:49.908399 kernel: ... event mask: 000000000000003f Sep 11 00:15:49.908410 kernel: signal: max sigframe size: 1776 Sep 11 00:15:49.908420 kernel: rcu: Hierarchical SRCU implementation. Sep 11 00:15:49.908432 kernel: rcu: Max phase no-delay instances is 400. Sep 11 00:15:49.908443 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 11 00:15:49.908457 kernel: smp: Bringing up secondary CPUs ... 
Sep 11 00:15:49.908468 kernel: smpboot: x86: Booting SMP configuration: Sep 11 00:15:49.908479 kernel: .... node #0, CPUs: #1 #2 #3 Sep 11 00:15:49.908489 kernel: smp: Brought up 1 node, 4 CPUs Sep 11 00:15:49.908500 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 11 00:15:49.908536 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2429K rwdata, 9960K rodata, 54036K init, 2932K bss, 136904K reserved, 0K cma-reserved) Sep 11 00:15:49.908546 kernel: devtmpfs: initialized Sep 11 00:15:49.908556 kernel: x86/mm: Memory block size: 128MB Sep 11 00:15:49.908566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 11 00:15:49.908580 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 11 00:15:49.908591 kernel: pinctrl core: initialized pinctrl subsystem Sep 11 00:15:49.908605 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 11 00:15:49.908614 kernel: audit: initializing netlink subsys (disabled) Sep 11 00:15:49.908625 kernel: audit: type=2000 audit(1757549745.695:1): state=initialized audit_enabled=0 res=1 Sep 11 00:15:49.908637 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 11 00:15:49.908647 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 11 00:15:49.908657 kernel: cpuidle: using governor menu Sep 11 00:15:49.908668 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 11 00:15:49.908683 kernel: dca service started, version 1.12.1 Sep 11 00:15:49.908694 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Sep 11 00:15:49.908704 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 11 00:15:49.908715 kernel: PCI: Using configuration type 1 for base access Sep 11 00:15:49.908725 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 11 00:15:49.908736 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 11 00:15:49.908746 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 11 00:15:49.908757 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 11 00:15:49.908771 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 11 00:15:49.908781 kernel: ACPI: Added _OSI(Module Device) Sep 11 00:15:49.908791 kernel: ACPI: Added _OSI(Processor Device) Sep 11 00:15:49.908801 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 11 00:15:49.908812 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 11 00:15:49.908823 kernel: ACPI: Interpreter enabled Sep 11 00:15:49.908833 kernel: ACPI: PM: (supports S0 S3 S5) Sep 11 00:15:49.908853 kernel: ACPI: Using IOAPIC for interrupt routing Sep 11 00:15:49.908864 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 11 00:15:49.908874 kernel: PCI: Using E820 reservations for host bridge windows Sep 11 00:15:49.908893 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 11 00:15:49.908903 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 11 00:15:49.909369 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 11 00:15:49.909585 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 11 00:15:49.909796 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 11 00:15:49.909929 kernel: PCI host bridge to bus 0000:00 Sep 11 00:15:49.910136 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 11 00:15:49.910294 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 11 00:15:49.910439 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 11 00:15:49.910608 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 11 00:15:49.910750 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 11 00:15:49.910895 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 11 00:15:49.911063 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 11 00:15:49.911343 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 11 00:15:49.911585 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 11 00:15:49.911763 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Sep 11 00:15:49.911971 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Sep 11 00:15:49.912140 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Sep 11 00:15:49.912300 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 11 00:15:49.912495 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 11 00:15:49.913589 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Sep 11 00:15:49.913773 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Sep 11 00:15:49.913952 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Sep 11 00:15:49.914494 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 11 00:15:49.914696 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Sep 11 00:15:49.914873 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Sep 11 00:15:49.915057 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Sep 11 00:15:49.915296 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 11 00:15:49.915507 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Sep 11 00:15:49.915720 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Sep 11 00:15:49.915898 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 11 00:15:49.916072 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Sep 11 00:15:49.916267 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 11 00:15:49.916443 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 11 00:15:49.916651 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 11 00:15:49.916825 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Sep 11 00:15:49.917004 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Sep 11 00:15:49.917197 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 11 00:15:49.917416 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Sep 11 00:15:49.917441 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 11 00:15:49.917542 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 11 00:15:49.917577 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 11 00:15:49.917589 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 11 00:15:49.917614 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 11 00:15:49.917647 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 11 00:15:49.917674 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 11 00:15:49.917696 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 11 00:15:49.917710 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 11 00:15:49.917720 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 11 00:15:49.917736 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 11 00:15:49.917746 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 11 00:15:49.917757 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 11 00:15:49.917768 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 11 00:15:49.917782 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 11 00:15:49.917792 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 11 00:15:49.917802 kernel: iommu: Default domain type: Translated Sep 11 00:15:49.917813 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 11 00:15:49.917823 kernel: PCI: Using ACPI for IRQ routing Sep 11 00:15:49.917838 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 11 00:15:49.917849 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 11 00:15:49.917860 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 11 00:15:49.918050 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 11 00:15:49.918212 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 11 00:15:49.918455 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 11 00:15:49.918473 kernel: vgaarb: loaded Sep 11 00:15:49.918484 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 11 00:15:49.918500 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 11 00:15:49.918527 kernel: clocksource: Switched to clocksource kvm-clock Sep 11 00:15:49.918535 kernel: VFS: Disk quotas dquot_6.6.0 Sep 11 
00:15:49.918544 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 11 00:15:49.918552 kernel: pnp: PnP ACPI init Sep 11 00:15:49.918707 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 11 00:15:49.918720 kernel: pnp: PnP ACPI: found 6 devices Sep 11 00:15:49.918728 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 11 00:15:49.918741 kernel: NET: Registered PF_INET protocol family Sep 11 00:15:49.918749 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 11 00:15:49.918760 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 11 00:15:49.918771 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 11 00:15:49.918795 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 11 00:15:49.918808 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 11 00:15:49.918818 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 11 00:15:49.918829 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 11 00:15:49.918840 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 11 00:15:49.918855 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 11 00:15:49.918866 kernel: NET: Registered PF_XDP protocol family Sep 11 00:15:49.919015 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 11 00:15:49.919773 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 11 00:15:49.920399 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 11 00:15:49.920605 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 11 00:15:49.920814 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 11 00:15:49.920994 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 11 00:15:49.921019 kernel: PCI: CLS 0 bytes, default 64 Sep 11 00:15:49.921031 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Sep 11 00:15:49.921042 kernel: Initialise system trusted keyrings Sep 11 00:15:49.921054 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 11 00:15:49.921065 kernel: Key type asymmetric registered Sep 11 00:15:49.921076 kernel: Asymmetric key parser 'x509' registered Sep 11 00:15:49.921087 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 11 00:15:49.921105 kernel: io scheduler mq-deadline registered Sep 11 00:15:49.922413 kernel: io scheduler kyber registered Sep 11 00:15:49.922433 kernel: io scheduler bfq registered Sep 11 00:15:49.922452 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 11 00:15:49.922463 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 11 00:15:49.922472 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 11 00:15:49.922480 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 11 00:15:49.922488 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 11 00:15:49.922497 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 11 00:15:49.922524 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 11 00:15:49.922536 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 11 00:15:49.922547 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 11 00:15:49.922562 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Sep 11 00:15:49.922858 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 11 00:15:49.923052 kernel: rtc_cmos 00:04: registered as rtc0 Sep 11 00:15:49.923215 kernel: rtc_cmos 00:04: setting system clock to 2025-09-11T00:15:49 UTC (1757549749) Sep 11 00:15:49.923399 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 11 00:15:49.923421 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 11 00:15:49.923431 kernel: NET: Registered PF_INET6 protocol family Sep 11 00:15:49.923444 kernel: Segment Routing with IPv6 Sep 11 00:15:49.923452 kernel: In-situ OAM (IOAM) with IPv6 Sep 11 00:15:49.923461 kernel: NET: Registered PF_PACKET protocol family Sep 11 00:15:49.923469 kernel: Key type dns_resolver registered Sep 11 00:15:49.923478 kernel: IPI shorthand broadcast: enabled Sep 11 00:15:49.923496 kernel: sched_clock: Marking stable (4079002805, 111809913)->(4224831077, -34018359) Sep 11 00:15:49.923531 kernel: registered taskstats version 1 Sep 11 00:15:49.923540 kernel: Loading compiled-in X.509 certificates Sep 11 00:15:49.923549 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 941433bdd955e1c3aa4064827516bddd510466ee' Sep 11 00:15:49.923561 kernel: Demotion targets for Node 0: null Sep 11 00:15:49.923569 kernel: Key type .fscrypt registered Sep 11 00:15:49.923578 kernel: Key type fscrypt-provisioning registered Sep 11 00:15:49.923586 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 11 00:15:49.923594 kernel: ima: Allocated hash algorithm: sha1 Sep 11 00:15:49.923603 kernel: ima: No architecture policies found Sep 11 00:15:49.923613 kernel: clk: Disabling unused clocks Sep 11 00:15:49.923622 kernel: Warning: unable to open an initial console. Sep 11 00:15:49.923631 kernel: Freeing unused kernel image (initmem) memory: 54036K Sep 11 00:15:49.923644 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:15:49.923653 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:15:49.923664 kernel: Run /init as init process Sep 11 00:15:49.923672 kernel: with arguments: Sep 11 00:15:49.923680 kernel: /init Sep 11 00:15:49.923688 kernel: with environment: Sep 11 00:15:49.923697 kernel: HOME=/ Sep 11 00:15:49.923705 kernel: TERM=linux Sep 11 00:15:49.923713 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:15:49.923727 systemd[1]: Successfully made /usr/ read-only. Sep 11 00:15:49.923751 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:15:49.923767 systemd[1]: Detected virtualization kvm. Sep 11 00:15:49.923778 systemd[1]: Detected architecture x86-64. Sep 11 00:15:49.923790 systemd[1]: Running in initrd. Sep 11 00:15:49.923805 systemd[1]: No hostname configured, using default hostname. Sep 11 00:15:49.923817 systemd[1]: Hostname set to . Sep 11 00:15:49.923829 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:15:49.923838 systemd[1]: Queued start job for default target initrd.target. Sep 11 00:15:49.923847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 11 00:15:49.923856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:15:49.923866 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 11 00:15:49.923876 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:15:49.923887 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 11 00:15:49.923898 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 11 00:15:49.923908 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 11 00:15:49.923917 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 11 00:15:49.923926 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:15:49.923946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:15:49.923955 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:15:49.923967 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:15:49.923976 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:15:49.923985 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:15:49.923995 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:15:49.924004 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:15:49.924013 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 11 00:15:49.924022 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 11 00:15:49.924033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:15:49.924042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:15:49.924055 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:15:49.924066 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:15:49.924079 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 11 00:15:49.924099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:15:49.924124 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 11 00:15:49.924142 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 11 00:15:49.924158 systemd[1]: Starting systemd-fsck-usr.service... Sep 11 00:15:49.924171 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:15:49.924184 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:15:49.924197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:15:49.924210 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 11 00:15:49.924228 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:15:49.924241 systemd[1]: Finished systemd-fsck-usr.service. Sep 11 00:15:49.924255 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:15:49.924310 systemd-journald[220]: Collecting audit messages is disabled. 
Sep 11 00:15:49.924349 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:15:49.924369 systemd-journald[220]: Journal started Sep 11 00:15:49.924398 systemd-journald[220]: Runtime Journal (/run/log/journal/55b167159cf040ec965f01e78bd194ee) is 6M, max 48.6M, 42.5M free. Sep 11 00:15:49.896789 systemd-modules-load[221]: Inserted module 'overlay' Sep 11 00:15:49.944963 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 11 00:15:49.944997 kernel: Bridge firewalling registered Sep 11 00:15:49.937467 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 11 00:15:49.947986 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:15:49.948363 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:15:49.949921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:15:49.956178 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 11 00:15:49.958229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:15:49.961166 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:15:49.969705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:15:49.981039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:15:49.983581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:15:49.983723 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 11 00:15:49.989794 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:15:49.993116 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:15:49.995410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:15:49.996994 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 11 00:15:50.026097 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=20820f07706ad5590d38fe5324b9055d59a89dc1109fdc449cad1a53209b9dbd Sep 11 00:15:50.047896 systemd-resolved[258]: Positive Trust Anchors: Sep 11 00:15:50.047918 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:15:50.047960 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:15:50.052664 systemd-resolved[258]: Defaulting to hostname 'linux'. 
Sep 11 00:15:50.057094 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:15:50.057832 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:15:50.154561 kernel: SCSI subsystem initialized Sep 11 00:15:50.165569 kernel: Loading iSCSI transport class v2.0-870. Sep 11 00:15:50.178578 kernel: iscsi: registered transport (tcp) Sep 11 00:15:50.203544 kernel: iscsi: registered transport (qla4xxx) Sep 11 00:15:50.203631 kernel: QLogic iSCSI HBA Driver Sep 11 00:15:50.227529 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:15:50.258230 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:15:50.259629 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:15:50.340208 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 11 00:15:50.342372 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 11 00:15:50.411566 kernel: raid6: avx2x4 gen() 29128 MB/s Sep 11 00:15:50.428566 kernel: raid6: avx2x2 gen() 28483 MB/s Sep 11 00:15:50.445620 kernel: raid6: avx2x1 gen() 23147 MB/s Sep 11 00:15:50.445716 kernel: raid6: using algorithm avx2x4 gen() 29128 MB/s Sep 11 00:15:50.463692 kernel: raid6: .... xor() 6884 MB/s, rmw enabled Sep 11 00:15:50.463805 kernel: raid6: using avx2x2 recovery algorithm Sep 11 00:15:50.485574 kernel: xor: automatically using best checksumming function avx Sep 11 00:15:50.669596 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 11 00:15:50.680586 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:15:50.683695 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:15:50.726736 systemd-udevd[471]: Using default interface naming scheme 'v255'. Sep 11 00:15:50.732553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:15:50.734326 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 11 00:15:50.761527 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Sep 11 00:15:50.801649 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:15:50.804503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:15:51.078100 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:15:51.081090 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 11 00:15:51.130560 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 11 00:15:51.137412 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 11 00:15:51.139152 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 11 00:15:51.139167 kernel: GPT:9289727 != 19775487 Sep 11 00:15:51.139187 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 11 00:15:51.139197 kernel: GPT:9289727 != 19775487 Sep 11 00:15:51.139207 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 11 00:15:51.139217 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:15:51.152547 kernel: cryptd: max_cpu_qlen set to 1000 Sep 11 00:15:51.165541 kernel: AES CTR mode by8 optimization enabled Sep 11 00:15:51.165588 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 11 00:15:51.167556 kernel: libata version 3.00 loaded. 
Sep 11 00:15:51.175471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:15:51.175600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:15:51.181587 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:15:51.185111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:15:51.192731 kernel: ahci 0000:00:1f.2: version 3.0 Sep 11 00:15:51.192984 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 11 00:15:51.192999 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 11 00:15:51.193144 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 11 00:15:51.193284 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 11 00:15:51.190079 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:15:51.221554 kernel: scsi host0: ahci Sep 11 00:15:51.222533 kernel: scsi host1: ahci Sep 11 00:15:51.225118 kernel: scsi host2: ahci Sep 11 00:15:51.226532 kernel: scsi host3: ahci Sep 11 00:15:51.228564 kernel: scsi host4: ahci Sep 11 00:15:51.232133 kernel: scsi host5: ahci Sep 11 00:15:51.232356 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 11 00:15:51.232374 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 11 00:15:51.235257 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 11 00:15:51.235285 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 11 00:15:51.235307 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 11 00:15:51.235321 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 11 00:15:51.240352 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 11 00:15:51.258867 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 11 00:15:51.282556 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 11 00:15:51.283364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:15:51.309608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 11 00:15:51.321379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:15:51.324090 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 00:15:51.351665 disk-uuid[633]: Primary Header is updated. Sep 11 00:15:51.351665 disk-uuid[633]: Secondary Entries is updated. Sep 11 00:15:51.351665 disk-uuid[633]: Secondary Header is updated. 
Sep 11 00:15:51.357551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:15:51.364843 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:15:51.542574 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 11 00:15:51.542652 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 11 00:15:51.542664 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 11 00:15:51.542676 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:15:51.542690 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 11 00:15:51.543828 kernel: ata3.00: applying bridge limits Sep 11 00:15:51.544556 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 11 00:15:51.545536 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 11 00:15:51.545560 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 11 00:15:51.546544 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:15:51.547549 kernel: ata3.00: configured for UDMA/100 Sep 11 00:15:51.549553 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 11 00:15:51.603552 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 11 00:15:51.603858 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 11 00:15:51.629554 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 11 00:15:52.045070 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 11 00:15:52.047023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:15:52.049183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:15:52.052049 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:15:52.053548 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 00:15:52.092832 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:15:52.394552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:15:52.395658 disk-uuid[634]: The operation has completed successfully. Sep 11 00:15:52.428031 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 00:15:52.428175 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 00:15:52.471976 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 11 00:15:52.505479 sh[663]: Success Sep 11 00:15:52.528439 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 11 00:15:52.528540 kernel: device-mapper: uevent: version 1.0.3 Sep 11 00:15:52.528561 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 00:15:52.538559 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 11 00:15:52.575047 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 00:15:52.713231 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 11 00:15:52.718226 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 11 00:15:52.731556 kernel: BTRFS: device fsid 1d23f222-37c7-4ff5-813e-235ce83bed46 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (675) Sep 11 00:15:52.731613 kernel: BTRFS info (device dm-0): first mount of filesystem 1d23f222-37c7-4ff5-813e-235ce83bed46 Sep 11 00:15:52.733533 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:15:52.739588 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 00:15:52.739607 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 00:15:52.741222 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 00:15:52.742646 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:15:52.743547 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 11 00:15:52.744804 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 11 00:15:52.747152 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 11 00:15:52.774562 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Sep 11 00:15:52.777235 kernel: BTRFS info (device vda6): first mount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c Sep 11 00:15:52.777278 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:15:52.781129 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:15:52.781159 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:15:52.786537 kernel: BTRFS info (device vda6): last unmount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c Sep 11 00:15:52.788794 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 11 00:15:52.790470 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 11 00:15:52.948720 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:15:52.980930 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:15:53.013195 ignition[750]: Ignition 2.21.0 Sep 11 00:15:53.013210 ignition[750]: Stage: fetch-offline Sep 11 00:15:53.013270 ignition[750]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:15:53.013284 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:15:53.013416 ignition[750]: parsed url from cmdline: "" Sep 11 00:15:53.013422 ignition[750]: no config URL provided Sep 11 00:15:53.013430 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:15:53.013443 ignition[750]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:15:53.013475 ignition[750]: op(1): [started] loading QEMU firmware config module Sep 11 00:15:53.013485 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 11 00:15:53.023537 ignition[750]: op(1): [finished] loading QEMU firmware config module Sep 11 00:15:53.023567 ignition[750]: QEMU firmware config was not found. Ignoring... Sep 11 00:15:53.053793 systemd-networkd[852]: lo: Link UP Sep 11 00:15:53.054260 systemd-networkd[852]: lo: Gained carrier Sep 11 00:15:53.056175 systemd-networkd[852]: Enumeration completed Sep 11 00:15:53.056614 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:15:53.057741 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 11 00:15:53.057746 systemd-networkd[852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:15:53.058896 systemd[1]: Reached target network.target - Network. Sep 11 00:15:53.060072 systemd-networkd[852]: eth0: Link UP Sep 11 00:15:53.060296 systemd-networkd[852]: eth0: Gained carrier Sep 11 00:15:53.060308 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:15:53.073561 systemd-networkd[852]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:15:53.081326 ignition[750]: parsing config with SHA512: a24e7d773496ac4fa013d2caede11527d1f5d6de005c384de1cb1da3f341c576b0012447aa54fd84561247d66ce024236119c00258634cf849dbb6734949a2e1 Sep 11 00:15:53.086350 unknown[750]: fetched base config from "system" Sep 11 00:15:53.086366 unknown[750]: fetched user config from "qemu" Sep 11 00:15:53.086843 ignition[750]: fetch-offline: fetch-offline passed Sep 11 00:15:53.086932 ignition[750]: Ignition finished successfully Sep 11 00:15:53.090548 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:15:53.093215 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 11 00:15:53.094208 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 11 00:15:53.135794 ignition[860]: Ignition 2.21.0 Sep 11 00:15:53.135809 ignition[860]: Stage: kargs Sep 11 00:15:53.135976 ignition[860]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:15:53.135990 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:15:53.136850 ignition[860]: kargs: kargs passed Sep 11 00:15:53.136917 ignition[860]: Ignition finished successfully Sep 11 00:15:53.141910 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:15:53.144151 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 11 00:15:53.295188 ignition[868]: Ignition 2.21.0 Sep 11 00:15:53.295204 ignition[868]: Stage: disks Sep 11 00:15:53.295729 ignition[868]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:15:53.295742 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:15:53.297711 ignition[868]: disks: disks passed Sep 11 00:15:53.297796 ignition[868]: Ignition finished successfully Sep 11 00:15:53.303903 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 00:15:53.304449 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:15:53.306180 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:15:53.306468 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:15:53.306943 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:15:53.307234 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:15:53.314868 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 00:15:53.356940 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 11 00:15:53.458856 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:15:53.460430 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 00:15:53.615537 kernel: EXT4-fs (vda9): mounted filesystem 8ebc908f-0860-41e2-beed-287b778bd592 r/w with ordered data mode. Quota mode: none. 
Sep 11 00:15:53.615901 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:15:53.616747 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:15:53.619934 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:15:53.621686 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:15:53.622987 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 11 00:15:53.623043 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:15:53.623072 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:15:53.634350 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 11 00:15:53.636660 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 11 00:15:53.643012 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 11 00:15:53.645286 kernel: BTRFS info (device vda6): first mount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c Sep 11 00:15:53.645338 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:15:53.648547 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:15:53.648585 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:15:53.650731 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:15:53.681453 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:15:53.686437 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:15:53.691328 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:15:53.696383 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:15:53.897710 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:15:53.946617 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:15:53.948747 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 11 00:15:53.967809 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 11 00:15:53.969368 kernel: BTRFS info (device vda6): last unmount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c Sep 11 00:15:53.988686 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 11 00:15:54.013889 ignition[1001]: INFO : Ignition 2.21.0 Sep 11 00:15:54.013889 ignition[1001]: INFO : Stage: mount Sep 11 00:15:54.015908 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:15:54.015908 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:15:54.018208 ignition[1001]: INFO : mount: mount passed Sep 11 00:15:54.018208 ignition[1001]: INFO : Ignition finished successfully Sep 11 00:15:54.019713 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 00:15:54.022080 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 00:15:54.048181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 11 00:15:54.059636 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013) Sep 11 00:15:54.059685 kernel: BTRFS info (device vda6): first mount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c Sep 11 00:15:54.060550 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:15:54.064784 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:15:54.064813 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:15:54.066877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:15:54.129888 ignition[1030]: INFO : Ignition 2.21.0 Sep 11 00:15:54.131203 ignition[1030]: INFO : Stage: files Sep 11 00:15:54.131992 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:15:54.131992 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:15:54.134783 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Sep 11 00:15:54.137118 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 11 00:15:54.137118 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 11 00:15:54.142361 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 11 00:15:54.144097 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 11 00:15:54.146031 unknown[1030]: wrote ssh authorized keys file for user: core Sep 11 00:15:54.147277 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 11 00:15:54.149053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 11 00:15:54.151136 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 11 00:15:54.237384 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 11 00:15:54.639168 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 11 00:15:54.639168 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 11 00:15:54.643399 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 11 00:15:54.837742 systemd-networkd[852]: eth0: Gained IPv6LL Sep 11 00:15:54.895553 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 11 00:15:55.013120 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 11 00:15:55.013120 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 11 00:15:55.017672 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 11 00:15:55.017672 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:15:55.017672 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Sep 11 00:15:55.017672 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:15:55.017672 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:15:55.017672 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:15:55.017672 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:15:55.030394 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:15:55.030394 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:15:55.030394 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 11 00:15:55.037837 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 11 00:15:55.037837 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 11 00:15:55.043215 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 11 00:15:55.458454 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 11 00:15:57.613634 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 11 00:15:57.613634 ignition[1030]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 11 00:15:57.617685 ignition[1030]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:15:57.891197 ignition[1030]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:15:57.891197 ignition[1030]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 11 00:15:57.891197 ignition[1030]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 11 00:15:57.895896 ignition[1030]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 00:15:57.895896 ignition[1030]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 00:15:57.895896 ignition[1030]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 11 00:15:57.895896 ignition[1030]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 11 00:15:57.917473 ignition[1030]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 00:15:57.923952 ignition[1030]: INFO : files: op(10): op(11): [finished] 
removing enablement symlink(s) for "coreos-metadata.service" Sep 11 00:15:57.925691 ignition[1030]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 11 00:15:57.925691 ignition[1030]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 11 00:15:57.925691 ignition[1030]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 11 00:15:57.925691 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:15:57.925691 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:15:57.925691 ignition[1030]: INFO : files: files passed Sep 11 00:15:57.925691 ignition[1030]: INFO : Ignition finished successfully Sep 11 00:15:57.935093 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 11 00:15:57.938841 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 11 00:15:57.941136 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 11 00:15:57.962352 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 11 00:15:57.962553 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 11 00:15:57.966329 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory Sep 11 00:15:57.969819 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:15:57.969819 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:15:57.976721 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:15:57.979781 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:15:57.982781 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 11 00:15:57.985726 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 11 00:15:58.072126 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 11 00:15:58.072276 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 11 00:15:58.073645 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 11 00:15:58.076271 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 11 00:15:58.076901 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 11 00:15:58.077912 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 11 00:15:58.106871 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:15:58.108840 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 11 00:15:58.138005 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:15:58.138425 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:15:58.138952 systemd[1]: Stopped target timers.target - Timer Units. Sep 11 00:15:58.139236 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 11 00:15:58.139377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
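The files stage finishes by writing /sysroot/etc/.ignition-result.json, which is visible at /etc/.ignition-result.json after the pivot to the real root. A hedged sketch that simply loads and pretty-prints it; the log does not show the file's schema, so no particular fields are assumed.

```python
#!/usr/bin/env python3
"""Pretty-print the Ignition result file written at the end of the files stage."""
import json
import sys

def load_result(path: str) -> dict:
    with open(path, encoding="utf-8") as handle:
        return json.load(handle)

if __name__ == "__main__":
    # Inside the initrd the file lives under /sysroot; after switch-root, under /etc.
    path = sys.argv[1] if len(sys.argv) > 1 else "/etc/.ignition-result.json"
    print(json.dumps(load_result(path), indent=2, sort_keys=True))
```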
Sep 11 00:15:58.144811 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 11 00:15:58.145144 systemd[1]: Stopped target basic.target - Basic System. Sep 11 00:15:58.145494 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 11 00:15:58.146065 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:15:58.146416 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 11 00:15:58.146976 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:15:58.147328 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 11 00:15:58.147896 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:15:58.148258 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 11 00:15:58.148812 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 11 00:15:58.149163 systemd[1]: Stopped target swap.target - Swaps. Sep 11 00:15:58.149504 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 11 00:15:58.149641 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:15:58.171258 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:15:58.172250 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:15:58.174617 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 11 00:15:58.177249 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:15:58.179750 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 11 00:15:58.179926 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 11 00:15:58.182705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 11 00:15:58.182865 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:15:58.183288 systemd[1]: Stopped target paths.target - Path Units. Sep 11 00:15:58.183558 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 11 00:15:58.190741 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:15:58.193629 systemd[1]: Stopped target slices.target - Slice Units. Sep 11 00:15:58.195537 systemd[1]: Stopped target sockets.target - Socket Units. Sep 11 00:15:58.196003 systemd[1]: iscsid.socket: Deactivated successfully. Sep 11 00:15:58.196141 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:15:58.196459 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 11 00:15:58.196586 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:15:58.199587 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 11 00:15:58.199758 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:15:58.202362 systemd[1]: ignition-files.service: Deactivated successfully. Sep 11 00:15:58.202530 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 11 00:15:58.206337 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 11 00:15:58.208107 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 11 00:15:58.208333 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:15:58.211852 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Sep 11 00:15:58.213435 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 11 00:15:58.213626 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:15:58.213995 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 11 00:15:58.214099 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:15:58.226355 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 11 00:15:58.429970 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 11 00:15:58.455674 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 11 00:15:58.464941 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 11 00:15:58.465083 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 11 00:15:58.471086 ignition[1085]: INFO : Ignition 2.21.0 Sep 11 00:15:58.471086 ignition[1085]: INFO : Stage: umount Sep 11 00:15:58.473663 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:15:58.473663 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:15:58.476158 ignition[1085]: INFO : umount: umount passed Sep 11 00:15:58.476158 ignition[1085]: INFO : Ignition finished successfully Sep 11 00:15:58.480812 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 11 00:15:58.480980 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 11 00:15:58.481992 systemd[1]: Stopped target network.target - Network. Sep 11 00:15:58.484369 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 11 00:15:58.484439 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 11 00:15:58.484989 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 11 00:15:58.485037 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 11 00:15:58.485338 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 11 00:15:58.485385 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 11 00:15:58.491457 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 11 00:15:58.491531 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 11 00:15:58.492057 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 11 00:15:58.492117 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 11 00:15:58.492491 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 11 00:15:58.497165 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 11 00:15:58.511502 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 11 00:15:58.512829 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 11 00:15:58.516878 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 11 00:15:58.517189 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 11 00:15:58.517359 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 11 00:15:58.521587 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 11 00:15:58.522413 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 11 00:15:58.523366 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 11 00:15:58.523454 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:15:58.526263 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
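Each Ignition run in this log reports its stage and outcome in a uniform "Stage: <name>" / "<name>: <name> passed" pair (kargs, disks, mount, files, and the umount stage above). A small sketch that pulls those pairs out of captured journal text; the regexes only match the message shapes visible here.

```python
#!/usr/bin/env python3
"""Summarise Ignition stages and outcomes from captured journal text on stdin."""
import re
import sys

STAGE_RE = re.compile(r"ignition\[\d+\]:.*Stage: ([\w-]+)")
RESULT_RE = re.compile(r"ignition\[\d+\]:.*\b([\w-]+): \1 passed")

def summarise(text: str) -> dict:
    outcomes = {}
    for match in STAGE_RE.finditer(text):
        outcomes.setdefault(match.group(1), "unknown")
    for match in RESULT_RE.finditer(text):
        outcomes[match.group(1)] = "passed"
    return outcomes

if __name__ == "__main__":
    log_text = sys.stdin.read()
    for stage, outcome in summarise(log_text).items():
        print(f"{stage}: {outcome}")
```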
Sep 11 00:15:58.527410 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 11 00:15:58.527500 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:15:58.528200 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 00:15:58.528266 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:15:58.533862 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 11 00:15:58.533930 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 11 00:15:58.534359 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 11 00:15:58.534412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:15:58.538930 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:15:58.540434 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 11 00:15:58.540504 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:15:58.552746 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 11 00:15:58.552900 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 11 00:15:58.561758 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 11 00:15:58.561989 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:15:58.563200 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 11 00:15:58.563265 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 11 00:15:58.566589 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 11 00:15:58.566634 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:15:58.567254 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 11 00:15:58.567334 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:15:58.568160 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 11 00:15:58.568212 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 11 00:15:58.574140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 11 00:15:58.574222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:15:58.576527 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 11 00:15:58.578747 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 11 00:15:58.578836 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:15:58.582393 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 11 00:15:58.582461 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:15:58.585585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:15:58.585648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:15:58.591335 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 11 00:15:58.591469 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 11 00:15:58.591577 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
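The run-credentials-systemd\x2dresolved.service.mount names above show systemd's unit-name escaping, where '-' inside a path component becomes \x2d. A rough sketch of that rule (simplified; systemd-escape(1) also handles leading dots and treats '/' as the separator, which is only partially reproduced here):

```python
#!/usr/bin/env python3
"""Simplified version of systemd's unit-name escaping (see systemd-escape(1))."""

SAFE = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")

def escape_component(component: str) -> str:
    # '/' becomes '-', every other unsafe byte (including '-') becomes \xNN.
    out = []
    for byte in component.encode("utf-8"):
        char = chr(byte)
        if char == "/":
            out.append("-")
        elif char in SAFE:
            out.append(char)
        else:
            out.append(f"\\x{byte:02x}")
    return "".join(out)

if __name__ == "__main__":
    # Reproduces the credential mount-unit name seen in the log.
    print("run-credentials-" + escape_component("systemd-resolved.service") + ".mount")
```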
Sep 11 00:15:58.641026 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 11 00:15:58.641164 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 11 00:15:58.642029 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 11 00:15:58.645195 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 11 00:15:58.682269 systemd[1]: Switching root. Sep 11 00:15:58.733709 systemd-journald[220]: Journal stopped Sep 11 00:16:00.210151 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 11 00:16:00.210249 kernel: SELinux: policy capability network_peer_controls=1 Sep 11 00:16:00.210277 kernel: SELinux: policy capability open_perms=1 Sep 11 00:16:00.210296 kernel: SELinux: policy capability extended_socket_class=1 Sep 11 00:16:00.210315 kernel: SELinux: policy capability always_check_network=0 Sep 11 00:16:00.210338 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 11 00:16:00.210350 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 11 00:16:00.210370 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 11 00:16:00.210387 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 11 00:16:00.210398 kernel: SELinux: policy capability userspace_initial_context=0 Sep 11 00:16:00.210417 kernel: audit: type=1403 audit(1757549759.284:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 11 00:16:00.210438 systemd[1]: Successfully loaded SELinux policy in 63.845ms. Sep 11 00:16:00.210472 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.281ms. Sep 11 00:16:00.210486 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:16:00.210499 systemd[1]: Detected virtualization kvm. Sep 11 00:16:00.210542 systemd[1]: Detected architecture x86-64. Sep 11 00:16:00.210556 systemd[1]: Detected first boot. Sep 11 00:16:00.210568 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:16:00.210587 zram_generator::config[1132]: No configuration found. Sep 11 00:16:00.210611 kernel: Guest personality initialized and is inactive Sep 11 00:16:00.210626 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 11 00:16:00.210644 kernel: Initialized host personality Sep 11 00:16:00.210656 kernel: NET: Registered PF_VSOCK protocol family Sep 11 00:16:00.210668 systemd[1]: Populated /etc with preset unit settings. Sep 11 00:16:00.210687 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 11 00:16:00.210714 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 11 00:16:00.210730 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 11 00:16:00.210752 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 11 00:16:00.210765 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 11 00:16:00.210784 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 11 00:16:00.210797 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 11 00:16:00.210809 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
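The "systemd 256.8 running in system mode (+PAM +AUDIT ... +LIBARCHIVE)" line above encodes compile-time features as +NAME/-NAME tokens. A small parser for that token list:

```python
#!/usr/bin/env python3
"""Split a systemd feature string like '+PAM +AUDIT -APPARMOR' into two sets."""

def parse_features(feature_string: str):
    enabled, disabled = set(), set()
    for token in feature_string.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

if __name__ == "__main__":
    # Shortened excerpt of the feature string from the log above.
    example = "+PAM +AUDIT +SELINUX -APPARMOR +SECCOMP -GCRYPT +OPENSSL -SYSVINIT"
    enabled, disabled = parse_features(example)
    print("enabled: ", " ".join(sorted(enabled)))
    print("disabled:", " ".join(sorted(disabled)))
```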
Sep 11 00:16:00.210821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 11 00:16:00.210837 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 11 00:16:00.210850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 11 00:16:00.210868 systemd[1]: Created slice user.slice - User and Session Slice. Sep 11 00:16:00.210881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:16:00.210894 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:16:00.210912 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 11 00:16:00.210924 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 11 00:16:00.210943 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 11 00:16:00.210958 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:16:00.210972 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 11 00:16:00.210985 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:16:00.210997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:16:00.211010 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 11 00:16:00.211028 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 11 00:16:00.211044 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 11 00:16:00.211059 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 11 00:16:00.211074 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:16:00.211089 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:16:00.211105 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:16:00.211120 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:16:00.211134 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 11 00:16:00.211154 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 11 00:16:00.211172 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 11 00:16:00.211184 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:16:00.211206 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:16:00.211222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:16:00.211234 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 11 00:16:00.211247 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 11 00:16:00.211259 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 11 00:16:00.211271 systemd[1]: Mounting media.mount - External Media Directory... Sep 11 00:16:00.211283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:00.211302 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 11 00:16:00.211314 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Sep 11 00:16:00.211327 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 11 00:16:00.211339 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 11 00:16:00.211351 systemd[1]: Reached target machines.target - Containers. Sep 11 00:16:00.211363 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 11 00:16:00.211380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:16:00.211392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:16:00.211409 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 11 00:16:00.211422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:16:00.211434 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:16:00.211448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:16:00.211460 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 11 00:16:00.211472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:16:00.211485 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 11 00:16:00.211497 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 11 00:16:00.211532 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 11 00:16:00.211544 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 11 00:16:00.211558 systemd[1]: Stopped systemd-fsck-usr.service. Sep 11 00:16:00.211571 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:16:00.211584 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:16:00.211597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:16:00.211609 kernel: loop: module loaded Sep 11 00:16:00.211621 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:16:00.211633 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 11 00:16:00.211651 kernel: fuse: init (API version 7.41) Sep 11 00:16:00.211695 systemd-journald[1196]: Collecting audit messages is disabled. Sep 11 00:16:00.211724 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 11 00:16:00.211737 systemd-journald[1196]: Journal started Sep 11 00:16:00.211765 systemd-journald[1196]: Runtime Journal (/run/log/journal/55b167159cf040ec965f01e78bd194ee) is 6M, max 48.6M, 42.5M free. Sep 11 00:15:59.946454 systemd[1]: Queued start job for default target multi-user.target. Sep 11 00:15:59.967259 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 11 00:15:59.967966 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 11 00:16:00.215828 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:16:00.217799 systemd[1]: verity-setup.service: Deactivated successfully. 
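systemd-journald reports its runtime journal budget above in the "is 6M, max 48.6M, 42.5M free" form. A sketch that parses that message into byte counts; the unit handling covers only the suffixes that actually appear in such lines.

```python
#!/usr/bin/env python3
"""Parse journald's 'is 6M, max 48.6M, 42.5M free' usage report."""
import re

SIZE_RE = re.compile(
    r"is (?P<used>[\d.]+)(?P<u1>[KMG]), max (?P<max>[\d.]+)(?P<u2>[KMG]), "
    r"(?P<free>[\d.]+)(?P<u3>[KMG]) free"
)
MULTIPLIER = {"K": 1024, "M": 1024**2, "G": 1024**3}

def parse_usage(message: str) -> dict:
    match = SIZE_RE.search(message)
    if not match:
        raise ValueError("no journald usage report found")
    g = match.groupdict()
    return {
        "used": int(float(g["used"]) * MULTIPLIER[g["u1"]]),
        "max": int(float(g["max"]) * MULTIPLIER[g["u2"]]),
        "free": int(float(g["free"]) * MULTIPLIER[g["u3"]]),
    }

if __name__ == "__main__":
    line = "Runtime Journal (/run/log/journal/...) is 6M, max 48.6M, 42.5M free."
    print(parse_usage(line))
```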
Sep 11 00:16:00.217836 systemd[1]: Stopped verity-setup.service. Sep 11 00:16:00.221543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:00.226538 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:16:00.227609 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 11 00:16:00.228831 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 11 00:16:00.230119 systemd[1]: Mounted media.mount - External Media Directory. Sep 11 00:16:00.231255 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 11 00:16:00.232589 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 11 00:16:00.234043 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 11 00:16:00.235587 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:16:00.237422 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 11 00:16:00.237855 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 11 00:16:00.241034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:16:00.241417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:16:00.243602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:16:00.244102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:16:00.246153 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 11 00:16:00.246406 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 11 00:16:00.248975 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:16:00.249299 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:16:00.253323 kernel: ACPI: bus type drm_connector registered Sep 11 00:16:00.251395 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:16:00.255454 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:16:00.255831 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:16:00.257472 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:16:00.259280 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 11 00:16:00.264238 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 11 00:16:00.275313 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:16:00.278111 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 11 00:16:00.281105 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 11 00:16:00.282458 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 11 00:16:00.282489 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:16:00.284594 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 11 00:16:00.303977 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 11 00:16:00.306777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
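The modprobe@*.service units above load configfs, dm_mod, efi_pstore, fuse, loop and drm early in boot. Whether each one actually ended up available (loaded as a module, or built into the kernel) can be checked from /proc/modules and /sys/module, as in this sketch:

```python
#!/usr/bin/env python3
"""Report whether the modules loaded by modprobe@*.service are present."""
import os

MODULES = ["configfs", "dm_mod", "efi_pstore", "fuse", "loop", "drm"]

def loaded_modules() -> set:
    # /proc/modules lists dynamically loaded modules, one per line, name first.
    with open("/proc/modules", encoding="utf-8") as handle:
        return {line.split()[0] for line in handle if line.strip()}

def is_present(name: str, loaded: set) -> bool:
    # Built-in code often shows up under /sys/module (when it exposes parameters
    # or attributes) even though it never appears in /proc/modules.
    return name in loaded or os.path.isdir(f"/sys/module/{name}")

if __name__ == "__main__":
    loaded = loaded_modules()
    for module in MODULES:
        state = "present" if is_present(module, loaded) else "absent"
        print(f"{module}: {state}")
```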
Sep 11 00:16:00.308383 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 11 00:16:00.318072 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 11 00:16:00.319625 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:16:00.322437 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 11 00:16:00.323764 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:16:00.326880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:16:00.331236 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 11 00:16:00.336369 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 11 00:16:00.337892 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 11 00:16:00.346086 systemd-journald[1196]: Time spent on flushing to /var/log/journal/55b167159cf040ec965f01e78bd194ee is 21.065ms for 988 entries. Sep 11 00:16:00.346086 systemd-journald[1196]: System Journal (/var/log/journal/55b167159cf040ec965f01e78bd194ee) is 8M, max 195.6M, 187.6M free. Sep 11 00:16:00.378068 systemd-journald[1196]: Received client request to flush runtime journal. Sep 11 00:16:00.378113 kernel: loop0: detected capacity change from 0 to 221472 Sep 11 00:16:00.353053 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 11 00:16:00.358192 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 11 00:16:00.359933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:16:00.364413 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 11 00:16:00.368663 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 11 00:16:00.374231 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 11 00:16:00.381053 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:16:00.385561 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 11 00:16:00.419560 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 11 00:16:00.421696 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 11 00:16:00.442683 kernel: loop1: detected capacity change from 0 to 128016 Sep 11 00:16:00.446727 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 11 00:16:00.450417 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:16:00.473687 kernel: loop2: detected capacity change from 0 to 111000 Sep 11 00:16:00.487977 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Sep 11 00:16:00.487998 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Sep 11 00:16:00.499198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 11 00:16:00.535550 kernel: loop3: detected capacity change from 0 to 221472 Sep 11 00:16:00.551541 kernel: loop4: detected capacity change from 0 to 128016 Sep 11 00:16:00.569539 kernel: loop5: detected capacity change from 0 to 111000 Sep 11 00:16:00.590960 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 11 00:16:00.591795 (sd-merge)[1273]: Merged extensions into '/usr'. Sep 11 00:16:00.599231 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 00:16:00.599252 systemd[1]: Reloading... Sep 11 00:16:00.746716 zram_generator::config[1302]: No configuration found. Sep 11 00:16:00.943599 ldconfig[1238]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:16:00.984539 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 11 00:16:00.985190 systemd[1]: Reloading finished in 385 ms. Sep 11 00:16:01.024140 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 00:16:01.025973 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:16:01.063211 systemd[1]: Starting ensure-sysext.service... Sep 11 00:16:01.065921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:16:01.086683 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:16:01.086703 systemd[1]: Reloading... Sep 11 00:16:01.148814 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:16:01.148858 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:16:01.149235 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:16:01.152065 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:16:01.153204 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:16:01.153549 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 11 00:16:01.153620 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 11 00:16:01.163278 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:16:01.163297 systemd-tmpfiles[1337]: Skipping /boot Sep 11 00:16:01.182399 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:16:01.182637 systemd-tmpfiles[1337]: Skipping /boot Sep 11 00:16:01.187549 zram_generator::config[1365]: No configuration found. Sep 11 00:16:01.547171 systemd[1]: Reloading finished in 460 ms. Sep 11 00:16:01.568148 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 00:16:01.598844 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:16:01.608867 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:16:01.611495 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:16:01.614295 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 00:16:01.623398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
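The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr. A sketch that lists the raw images systemd-sysext would consider, using its standard search directories; which of those directories exist on a given image is not guaranteed, and /etc/extensions is where Ignition linked kubernetes.raw earlier in this log.

```python
#!/usr/bin/env python3
"""List sysext images in the usual systemd-sysext search directories."""
from pathlib import Path

# Standard sysext locations; adjust if your image uses others.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def find_images() -> list:
    images = []
    for directory in SEARCH_DIRS:
        root = Path(directory)
        if not root.is_dir():
            continue
        for entry in sorted(root.iterdir()):
            # systemd-sysext accepts raw disk images and plain directory trees.
            if entry.suffix == ".raw" or entry.is_dir():
                images.append(entry)
    return images

if __name__ == "__main__":
    for image in find_images():
        print(image)
```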
Sep 11 00:16:01.627421 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:16:01.631625 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:16:01.636207 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:01.636434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:16:01.662855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:16:01.666999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:16:01.679396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:16:01.681155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:16:01.681308 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:16:01.681454 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:01.683637 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:16:01.698476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:16:01.698897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:16:01.701185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:16:01.701648 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:16:01.712392 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:16:01.712734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:16:01.721268 systemd-udevd[1407]: Using default interface naming scheme 'v255'. Sep 11 00:16:01.721458 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:16:01.731716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:01.732249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:16:01.734818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:16:01.737258 augenrules[1437]: No rules Sep 11 00:16:01.739839 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:16:01.751879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:16:01.755279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:16:01.756793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:16:01.757030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:16:01.760742 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 11 00:16:01.774676 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 00:16:01.782599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:01.784292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:16:01.786653 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:16:01.788049 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:16:01.790201 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:16:01.792388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:16:01.793727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:16:01.795785 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:16:01.796058 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:16:01.799144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:16:01.799454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:16:01.802622 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:16:01.802870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:16:01.805396 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:16:01.821682 systemd[1]: Finished ensure-sysext.service. Sep 11 00:16:01.845689 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:16:01.846892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:16:01.846984 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:16:01.851253 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 11 00:16:01.852596 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:16:01.901884 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:16:01.927855 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:16:02.014439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:16:02.024540 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:16:02.027276 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:16:02.072586 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 11 00:16:02.079556 kernel: ACPI: button: Power Button [PWRF] Sep 11 00:16:02.086913 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:16:02.096699 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 11 00:16:02.097081 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 11 00:16:02.129734 systemd-resolved[1406]: Positive Trust Anchors: Sep 11 00:16:02.129751 systemd-resolved[1406]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:16:02.129781 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:16:02.152609 systemd-resolved[1406]: Defaulting to hostname 'linux'. Sep 11 00:16:02.154020 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:16:02.156464 systemd-networkd[1484]: lo: Link UP Sep 11 00:16:02.156787 systemd-networkd[1484]: lo: Gained carrier Sep 11 00:16:02.158592 systemd-networkd[1484]: Enumeration completed Sep 11 00:16:02.160956 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:16:02.161617 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:16:02.173732 systemd-networkd[1484]: eth0: Link UP Sep 11 00:16:02.176818 systemd-networkd[1484]: eth0: Gained carrier Sep 11 00:16:02.177213 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:16:02.182994 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 11 00:16:02.184322 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:16:02.189590 systemd-networkd[1484]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:16:02.192215 systemd-timesyncd[1485]: Network configuration changed, trying to establish connection. Sep 11 00:16:02.192395 systemd[1]: Reached target network.target - Network. Sep 11 00:16:02.210318 systemd-timesyncd[1485]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 11 00:16:02.210388 systemd-timesyncd[1485]: Initial clock synchronization to Thu 2025-09-11 00:16:02.443113 UTC. Sep 11 00:16:02.211757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:16:02.216655 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:16:02.218711 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:16:02.220240 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:16:02.221649 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:16:02.222899 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:16:02.224265 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:16:02.224305 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:16:02.225343 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:16:02.227835 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:16:02.229170 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
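systemd-networkd again reports a DHCPv4 lease of 10.0.0.58/16 with gateway 10.0.0.1, and systemd-timesyncd synchronises against 10.0.0.1:123. A quick check with the standard ipaddress module that the gateway and time server sit inside the leased subnet:

```python
#!/usr/bin/env python3
"""Sanity-check the DHCPv4 lease reported by systemd-networkd."""
import ipaddress

lease = ipaddress.ip_interface("10.0.0.58/16")   # address acquired on eth0
gateway = ipaddress.ip_address("10.0.0.1")       # default gateway from the lease
ntp_server = ipaddress.ip_address("10.0.0.1")    # time server contacted by timesyncd

print("network:      ", lease.network)                 # 10.0.0.0/16
print("usable hosts: ", lease.network.num_addresses - 2)
print("gateway local:", gateway in lease.network)      # True
print("ntp local:    ", ntp_server in lease.network)   # True
```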
Sep 11 00:16:02.230806 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:16:02.233165 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:16:02.236479 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:16:02.241425 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:16:02.243853 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:16:02.245468 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:16:02.260218 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:16:02.262729 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:16:02.266733 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:16:02.270946 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:16:02.274464 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:16:02.293389 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:16:02.294694 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:16:02.297480 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:16:02.297532 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:16:02.319779 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 00:16:02.322819 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:16:02.331577 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:16:02.337795 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:16:02.340296 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:16:02.341439 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:16:02.343666 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:16:02.345190 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:16:02.350642 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:16:02.352293 jq[1530]: false Sep 11 00:16:02.390335 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:16:02.395222 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing passwd entry cache Sep 11 00:16:02.395241 oslogin_cache_refresh[1532]: Refreshing passwd entry cache Sep 11 00:16:02.397136 extend-filesystems[1531]: Found /dev/vda6 Sep 11 00:16:02.400377 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 11 00:16:02.404544 extend-filesystems[1531]: Found /dev/vda9 Sep 11 00:16:02.404544 extend-filesystems[1531]: Checking size of /dev/vda9 Sep 11 00:16:02.404533 oslogin_cache_refresh[1532]: Failure getting users, quitting Sep 11 00:16:02.410671 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting users, quitting Sep 11 00:16:02.410671 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:16:02.410671 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing group entry cache Sep 11 00:16:02.404559 oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:16:02.404615 oslogin_cache_refresh[1532]: Refreshing group entry cache Sep 11 00:16:02.412367 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 00:16:02.416009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:16:02.416155 oslogin_cache_refresh[1532]: Failure getting groups, quitting Sep 11 00:16:02.417089 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting groups, quitting Sep 11 00:16:02.417089 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:16:02.416184 oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:16:02.418597 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:16:02.419418 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 00:16:02.421889 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:16:02.424775 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:16:02.427856 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:16:02.431969 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:16:02.434830 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:16:02.435177 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:16:02.435576 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:16:02.435904 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:16:02.437931 extend-filesystems[1531]: Resized partition /dev/vda9 Sep 11 00:16:02.437712 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:16:02.468020 extend-filesystems[1557]: resize2fs 1.47.2 (1-Jan-2025) Sep 11 00:16:02.456769 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:16:02.466404 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:16:02.470439 jq[1552]: true Sep 11 00:16:02.466785 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
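ssh-key-proc-cmdline.service, finished above, installs an SSH key handed in on the kernel command line. The exact parameter name is not shown in this log, so the sketch below just splits /proc/cmdline into key=value pairs and looks up a caller-supplied name; the default "sshkey" is an assumption.

```python
#!/usr/bin/env python3
"""Split /proc/cmdline into key=value pairs and look one up."""
import sys

def parse_cmdline(text: str) -> dict:
    params = {}
    for token in text.split():
        key, _, value = token.partition("=")
        params[key] = value   # flags without '=' get an empty value
    return params

if __name__ == "__main__":
    wanted = sys.argv[1] if len(sys.argv) > 1 else "sshkey"  # assumed parameter name
    with open("/proc/cmdline", encoding="utf-8") as handle:
        params = parse_cmdline(handle.read())
    print(params.get(wanted, f"<no {wanted}= parameter on the command line>"))
```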
Sep 11 00:16:02.487182 (ntainerd)[1561]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:16:02.491556 jq[1559]: true Sep 11 00:16:02.522189 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 11 00:16:02.544908 systemd-logind[1548]: Watching system buttons on /dev/input/event2 (Power Button) Sep 11 00:16:02.544966 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:16:02.548718 systemd-logind[1548]: New seat seat0. Sep 11 00:16:02.552001 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 00:16:02.555858 update_engine[1551]: I20250911 00:16:02.555282 1551 main.cc:92] Flatcar Update Engine starting Sep 11 00:16:02.558313 tar[1558]: linux-amd64/helm Sep 11 00:16:02.584081 kernel: kvm_amd: TSC scaling supported Sep 11 00:16:02.584161 kernel: kvm_amd: Nested Virtualization enabled Sep 11 00:16:02.584181 kernel: kvm_amd: Nested Paging enabled Sep 11 00:16:02.584938 kernel: kvm_amd: LBR virtualization supported Sep 11 00:16:02.584978 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 11 00:16:02.586542 kernel: kvm_amd: Virtual GIF supported Sep 11 00:16:02.593800 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 11 00:16:02.612909 dbus-daemon[1528]: [system] SELinux support is enabled Sep 11 00:16:02.623184 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 11 00:16:02.654491 update_engine[1551]: I20250911 00:16:02.625422 1551 update_check_scheduler.cc:74] Next update check in 3m6s Sep 11 00:16:02.613173 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 00:16:02.616782 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:16:02.616809 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:16:02.616901 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:16:02.616918 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:16:02.625290 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:16:02.627840 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 00:16:02.658300 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 11 00:16:02.658300 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 11 00:16:02.658300 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 11 00:16:02.657401 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:16:02.660128 extend-filesystems[1531]: Resized filesystem in /dev/vda9 Sep 11 00:16:02.657767 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
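The extend-filesystems unit above grows the ext4 filesystem on /dev/vda9 online, from 553472 to 1864699 blocks at the 4 KiB block size reported by the kernel. The short Go sketch below is not part of the log; the constants are copied from the messages above, and it simply converts those block counts to sizes, roughly 2.1 GiB before and 7.1 GiB after the resize:

    package main

    import "fmt"

    func main() {
        // Figures taken from the EXT4-fs and resize2fs messages above.
        const blockSize = 4096 // "(4k) blocks"
        const oldBlocks = 553472
        const newBlocks = 1864699

        toGiB := func(blocks int) float64 {
            return float64(blocks) * blockSize / (1 << 30)
        }
        fmt.Printf("before resize: %.2f GiB\n", toGiB(oldBlocks)) // ~2.11 GiB
        fmt.Printf("after resize:  %.2f GiB\n", toGiB(newBlocks)) // ~7.11 GiB
    }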
Sep 11 00:16:02.676779 bash[1593]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:16:02.777538 kernel: EDAC MC: Ver: 3.0.0 Sep 11 00:16:02.785596 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:16:02.827954 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:16:02.887111 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:16:02.889412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:16:02.891338 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:16:02.900088 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 00:16:02.901805 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:16:02.928038 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:16:02.928374 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:16:02.932932 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:16:02.964848 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:16:02.969838 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:16:02.972426 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:16:02.974184 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:16:03.079906 containerd[1561]: time="2025-09-11T00:16:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:16:03.080677 containerd[1561]: time="2025-09-11T00:16:03.080634109Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 11 00:16:03.096234 containerd[1561]: time="2025-09-11T00:16:03.096145748Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.919µs" Sep 11 00:16:03.096234 containerd[1561]: time="2025-09-11T00:16:03.096196166Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:16:03.096234 containerd[1561]: time="2025-09-11T00:16:03.096222881Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:16:03.096571 containerd[1561]: time="2025-09-11T00:16:03.096540752Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:16:03.096571 containerd[1561]: time="2025-09-11T00:16:03.096563765Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:16:03.096661 containerd[1561]: time="2025-09-11T00:16:03.096594686Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:16:03.096715 containerd[1561]: time="2025-09-11T00:16:03.096680552Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:16:03.096715 containerd[1561]: time="2025-09-11T00:16:03.096711555Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:16:03.097092 containerd[1561]: time="2025-09-11T00:16:03.097061007Z" level=info msg="skip 
loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:16:03.097092 containerd[1561]: time="2025-09-11T00:16:03.097079834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:16:03.097158 containerd[1561]: time="2025-09-11T00:16:03.097090886Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:16:03.097158 containerd[1561]: time="2025-09-11T00:16:03.097100083Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:16:03.097285 containerd[1561]: time="2025-09-11T00:16:03.097244966Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:16:03.097664 containerd[1561]: time="2025-09-11T00:16:03.097633814Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:16:03.097700 containerd[1561]: time="2025-09-11T00:16:03.097676345Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:16:03.097700 containerd[1561]: time="2025-09-11T00:16:03.097688006Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:16:03.097766 containerd[1561]: time="2025-09-11T00:16:03.097739476Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:16:03.099348 containerd[1561]: time="2025-09-11T00:16:03.099313211Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:16:03.099486 containerd[1561]: time="2025-09-11T00:16:03.099449475Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:16:03.107173 containerd[1561]: time="2025-09-11T00:16:03.107116284Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:16:03.107293 containerd[1561]: time="2025-09-11T00:16:03.107192994Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:16:03.107293 containerd[1561]: time="2025-09-11T00:16:03.107235122Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:16:03.107293 containerd[1561]: time="2025-09-11T00:16:03.107271734Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:16:03.107385 containerd[1561]: time="2025-09-11T00:16:03.107294757Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:16:03.107385 containerd[1561]: time="2025-09-11T00:16:03.107307718Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:16:03.107385 containerd[1561]: time="2025-09-11T00:16:03.107321162Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:16:03.107385 containerd[1561]: time="2025-09-11T00:16:03.107335113Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 00:16:03.107385 containerd[1561]: time="2025-09-11T00:16:03.107355538Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:16:03.107385 containerd[1561]: time="2025-09-11T00:16:03.107376839Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:16:03.107578 containerd[1561]: time="2025-09-11T00:16:03.107387438Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:16:03.107578 containerd[1561]: time="2025-09-11T00:16:03.107402873Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:16:03.107837 containerd[1561]: time="2025-09-11T00:16:03.107796804Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:16:03.107901 containerd[1561]: time="2025-09-11T00:16:03.107866183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:16:03.107948 containerd[1561]: time="2025-09-11T00:16:03.107904651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:16:03.108009 containerd[1561]: time="2025-09-11T00:16:03.107943749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:16:03.108009 containerd[1561]: time="2025-09-11T00:16:03.107977567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:16:03.108087 containerd[1561]: time="2025-09-11T00:16:03.108030625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:16:03.108087 containerd[1561]: time="2025-09-11T00:16:03.108059277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:16:03.108087 containerd[1561]: time="2025-09-11T00:16:03.108073887Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:16:03.108168 containerd[1561]: time="2025-09-11T00:16:03.108091518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:16:03.108168 containerd[1561]: time="2025-09-11T00:16:03.108113500Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:16:03.108253 containerd[1561]: time="2025-09-11T00:16:03.108224132Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:16:03.108407 containerd[1561]: time="2025-09-11T00:16:03.108377551Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:16:03.108439 containerd[1561]: time="2025-09-11T00:16:03.108407255Z" level=info msg="Start snapshots syncer" Sep 11 00:16:03.108477 containerd[1561]: time="2025-09-11T00:16:03.108462220Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:16:03.109024 containerd[1561]: time="2025-09-11T00:16:03.108895269Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:16:03.115333 containerd[1561]: time="2025-09-11T00:16:03.109044379Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:16:03.116389 containerd[1561]: time="2025-09-11T00:16:03.116322052Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:16:03.116690 containerd[1561]: time="2025-09-11T00:16:03.116662750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:16:03.116724 containerd[1561]: time="2025-09-11T00:16:03.116712116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:16:03.116745 containerd[1561]: time="2025-09-11T00:16:03.116730768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:16:03.116771 containerd[1561]: time="2025-09-11T00:16:03.116749544Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:16:03.116814 containerd[1561]: time="2025-09-11T00:16:03.116777743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:16:03.116814 containerd[1561]: time="2025-09-11T00:16:03.116795693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:16:03.116852 containerd[1561]: time="2025-09-11T00:16:03.116811716Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:16:03.116873 containerd[1561]: time="2025-09-11T00:16:03.116859648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:16:03.116892 containerd[1561]: 
time="2025-09-11T00:16:03.116875372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:16:03.116892 containerd[1561]: time="2025-09-11T00:16:03.116888940Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:16:03.117046 containerd[1561]: time="2025-09-11T00:16:03.116962299Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:16:03.117046 containerd[1561]: time="2025-09-11T00:16:03.116986508Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:16:03.117097 containerd[1561]: time="2025-09-11T00:16:03.117071240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:16:03.117118 containerd[1561]: time="2025-09-11T00:16:03.117105935Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:16:03.117139 containerd[1561]: time="2025-09-11T00:16:03.117120297Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:16:03.117159 containerd[1561]: time="2025-09-11T00:16:03.117147032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:16:03.117185 containerd[1561]: time="2025-09-11T00:16:03.117169994Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:16:03.117206 containerd[1561]: time="2025-09-11T00:16:03.117195728Z" level=info msg="runtime interface created" Sep 11 00:16:03.117206 containerd[1561]: time="2025-09-11T00:16:03.117203182Z" level=info msg="created NRI interface" Sep 11 00:16:03.117242 containerd[1561]: time="2025-09-11T00:16:03.117214060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:16:03.117242 containerd[1561]: time="2025-09-11T00:16:03.117238475Z" level=info msg="Connect containerd service" Sep 11 00:16:03.117317 containerd[1561]: time="2025-09-11T00:16:03.117279985Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:16:03.118597 containerd[1561]: time="2025-09-11T00:16:03.118566049Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:16:03.238933 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:16:03.242618 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:35370.service - OpenSSH per-connection server daemon (10.0.0.1:35370). 
Sep 11 00:16:03.351392 containerd[1561]: time="2025-09-11T00:16:03.351326801Z" level=info msg="Start subscribing containerd event" Sep 11 00:16:03.351565 containerd[1561]: time="2025-09-11T00:16:03.351409510Z" level=info msg="Start recovering state" Sep 11 00:16:03.351609 containerd[1561]: time="2025-09-11T00:16:03.351581448Z" level=info msg="Start event monitor" Sep 11 00:16:03.351609 containerd[1561]: time="2025-09-11T00:16:03.351599872Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:16:03.351659 containerd[1561]: time="2025-09-11T00:16:03.351612905Z" level=info msg="Start streaming server" Sep 11 00:16:03.351659 containerd[1561]: time="2025-09-11T00:16:03.351636062Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:16:03.351659 containerd[1561]: time="2025-09-11T00:16:03.351649033Z" level=info msg="runtime interface starting up..." Sep 11 00:16:03.351737 containerd[1561]: time="2025-09-11T00:16:03.351660859Z" level=info msg="starting plugins..." Sep 11 00:16:03.351737 containerd[1561]: time="2025-09-11T00:16:03.351688563Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:16:03.352394 containerd[1561]: time="2025-09-11T00:16:03.352358515Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:16:03.352440 containerd[1561]: time="2025-09-11T00:16:03.352417027Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:16:03.352501 containerd[1561]: time="2025-09-11T00:16:03.352483911Z" level=info msg="containerd successfully booted in 0.273222s" Sep 11 00:16:03.352708 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:16:03.370332 tar[1558]: linux-amd64/LICENSE Sep 11 00:16:03.370495 tar[1558]: linux-amd64/README.md Sep 11 00:16:03.393098 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:16:03.441640 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 35370 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:16:03.444088 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:03.453763 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:16:03.456793 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:16:03.470060 systemd-logind[1548]: New session 1 of user core. Sep 11 00:16:03.499692 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:16:03.508779 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:16:03.565758 systemd-networkd[1484]: eth0: Gained IPv6LL Sep 11 00:16:03.570021 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:16:03.572089 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:16:03.575232 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 00:16:03.578524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:03.581623 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:16:03.589189 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:16:03.595368 systemd-logind[1548]: New session c1 of user core. Sep 11 00:16:03.616995 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 11 00:16:03.621290 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 00:16:03.621621 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 11 00:16:03.623502 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:16:03.801919 systemd[1653]: Queued start job for default target default.target. Sep 11 00:16:03.832343 systemd[1653]: Created slice app.slice - User Application Slice. Sep 11 00:16:03.832377 systemd[1653]: Reached target paths.target - Paths. Sep 11 00:16:03.832427 systemd[1653]: Reached target timers.target - Timers. Sep 11 00:16:03.834275 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:16:03.851727 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:16:03.851943 systemd[1653]: Reached target sockets.target - Sockets. Sep 11 00:16:03.852018 systemd[1653]: Reached target basic.target - Basic System. Sep 11 00:16:03.852083 systemd[1653]: Reached target default.target - Main User Target. Sep 11 00:16:03.852131 systemd[1653]: Startup finished in 247ms. Sep 11 00:16:03.852649 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:16:03.860996 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:16:03.932847 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:35384.service - OpenSSH per-connection server daemon (10.0.0.1:35384). Sep 11 00:16:04.073141 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 35384 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:16:04.075094 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:04.082047 systemd-logind[1548]: New session 2 of user core. Sep 11 00:16:04.091725 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 00:16:04.154426 sshd[1685]: Connection closed by 10.0.0.1 port 35384 Sep 11 00:16:04.154919 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:04.165186 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:35384.service: Deactivated successfully. Sep 11 00:16:04.167644 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 00:16:04.168511 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Sep 11 00:16:04.173910 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:35386.service - OpenSSH per-connection server daemon (10.0.0.1:35386). Sep 11 00:16:04.176342 systemd-logind[1548]: Removed session 2. Sep 11 00:16:04.247232 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 35386 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:16:04.249418 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:04.255466 systemd-logind[1548]: New session 3 of user core. Sep 11 00:16:04.266684 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:16:04.327882 sshd[1694]: Connection closed by 10.0.0.1 port 35386 Sep 11 00:16:04.328310 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:04.334250 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:35386.service: Deactivated successfully. Sep 11 00:16:04.337145 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 00:16:04.338551 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Sep 11 00:16:04.340200 systemd-logind[1548]: Removed session 3. 
Sep 11 00:16:05.161655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:05.163462 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:16:05.164780 systemd[1]: Startup finished in 4.154s (kernel) + 9.576s (initrd) + 5.941s (userspace) = 19.673s. Sep 11 00:16:05.200246 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:16:06.250111 kubelet[1704]: E0911 00:16:06.250003 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:16:06.256381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:16:06.256629 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:16:06.257112 systemd[1]: kubelet.service: Consumed 2.308s CPU time, 267.3M memory peak. Sep 11 00:16:14.487231 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:37802.service - OpenSSH per-connection server daemon (10.0.0.1:37802). Sep 11 00:16:14.552233 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 37802 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:16:14.554200 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:14.559351 systemd-logind[1548]: New session 4 of user core. Sep 11 00:16:14.578693 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:16:14.636098 sshd[1720]: Connection closed by 10.0.0.1 port 37802 Sep 11 00:16:14.636560 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:14.645344 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:37802.service: Deactivated successfully. Sep 11 00:16:14.647503 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:16:14.648498 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:16:14.651443 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:37804.service - OpenSSH per-connection server daemon (10.0.0.1:37804). Sep 11 00:16:14.652360 systemd-logind[1548]: Removed session 4. Sep 11 00:16:14.705458 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 37804 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:16:14.707019 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:14.712435 systemd-logind[1548]: New session 5 of user core. Sep 11 00:16:14.722709 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 11 00:16:14.774329 sshd[1729]: Connection closed by 10.0.0.1 port 37804 Sep 11 00:16:14.774462 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:14.786492 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:37804.service: Deactivated successfully. Sep 11 00:16:14.788604 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:16:14.789582 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:16:14.791532 systemd-logind[1548]: Removed session 5. Sep 11 00:16:14.793043 systemd[1]: Started sshd@5-10.0.0.58:22-10.0.0.1:37808.service - OpenSSH per-connection server daemon (10.0.0.1:37808). 
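The kubelet failure recorded above repeats on every restart for the same reason: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd schedules another attempt (the restart counter climbs in the later entries). The Go sketch below is only an illustration of that failing precondition, not kubelet code; on a real node the file is normally written during bootstrap (for example by kubeadm) before the service can stay up:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path reported in the kubelet error messages above.
        const path = "/var/lib/kubelet/config.yaml"

        data, err := os.ReadFile(path)
        if err != nil {
            // On the freshly booted node this prints something like:
            // open /var/lib/kubelet/config.yaml: no such file or directory
            fmt.Fprintf(os.Stderr, "failed to load kubelet config file %q: %v\n", path, err)
            os.Exit(1)
        }
        fmt.Printf("kubelet config present (%d bytes)\n", len(data))
    }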
Sep 11 00:16:14.855096 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 37808 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:16:14.856893 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:14.862322 systemd-logind[1548]: New session 6 of user core. Sep 11 00:16:14.869663 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 11 00:16:14.926464 sshd[1738]: Connection closed by 10.0.0.1 port 37808 Sep 11 00:16:14.926868 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:14.939609 systemd[1]: sshd@5-10.0.0.58:22-10.0.0.1:37808.service: Deactivated successfully. Sep 11 00:16:14.941776 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:16:14.942576 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:16:14.945459 systemd[1]: Started sshd@6-10.0.0.58:22-10.0.0.1:37812.service - OpenSSH per-connection server daemon (10.0.0.1:37812). Sep 11 00:16:14.946272 systemd-logind[1548]: Removed session 6. Sep 11 00:16:15.008778 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 37812 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:16:15.010429 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:15.015758 systemd-logind[1548]: New session 7 of user core. Sep 11 00:16:15.025695 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:16:15.088622 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:16:15.088972 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:16:15.108017 sudo[1748]: pam_unix(sudo:session): session closed for user root Sep 11 00:16:15.110469 sshd[1747]: Connection closed by 10.0.0.1 port 37812 Sep 11 00:16:15.110973 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:15.144095 systemd[1]: sshd@6-10.0.0.58:22-10.0.0.1:37812.service: Deactivated successfully. Sep 11 00:16:15.146434 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:16:15.147445 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:16:15.151546 systemd[1]: Started sshd@7-10.0.0.58:22-10.0.0.1:37828.service - OpenSSH per-connection server daemon (10.0.0.1:37828). Sep 11 00:16:15.152344 systemd-logind[1548]: Removed session 7. Sep 11 00:16:15.211202 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 37828 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:16:15.213392 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:15.219151 systemd-logind[1548]: New session 8 of user core. Sep 11 00:16:15.228746 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 11 00:16:15.287497 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:16:15.288012 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:16:15.295789 sudo[1759]: pam_unix(sudo:session): session closed for user root Sep 11 00:16:15.303424 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:16:15.303801 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:16:15.316885 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:16:15.380228 augenrules[1781]: No rules Sep 11 00:16:15.382309 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:16:15.382700 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:16:15.384260 sudo[1758]: pam_unix(sudo:session): session closed for user root Sep 11 00:16:15.386730 sshd[1757]: Connection closed by 10.0.0.1 port 37828 Sep 11 00:16:15.387134 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:15.398449 systemd[1]: sshd@7-10.0.0.58:22-10.0.0.1:37828.service: Deactivated successfully. Sep 11 00:16:15.401071 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:16:15.402044 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:16:15.406262 systemd[1]: Started sshd@8-10.0.0.58:22-10.0.0.1:37834.service - OpenSSH per-connection server daemon (10.0.0.1:37834). Sep 11 00:16:15.407046 systemd-logind[1548]: Removed session 8. Sep 11 00:16:15.458954 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 37834 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:16:15.460356 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:15.465726 systemd-logind[1548]: New session 9 of user core. Sep 11 00:16:15.479684 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:16:15.534054 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:16:15.534374 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:16:16.084246 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:16:16.101965 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:16:16.508105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:16:16.512072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:16.633906 dockerd[1814]: time="2025-09-11T00:16:16.633806261Z" level=info msg="Starting up" Sep 11 00:16:16.634897 dockerd[1814]: time="2025-09-11T00:16:16.634843651Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:16:16.656731 dockerd[1814]: time="2025-09-11T00:16:16.656660352Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 11 00:16:16.962909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:16:16.971956 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:16:17.005468 dockerd[1814]: time="2025-09-11T00:16:17.005398272Z" level=info msg="Loading containers: start." Sep 11 00:16:17.017573 kernel: Initializing XFRM netlink socket Sep 11 00:16:17.027116 kubelet[1847]: E0911 00:16:17.027054 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:16:17.034659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:16:17.034875 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:16:17.035302 systemd[1]: kubelet.service: Consumed 366ms CPU time, 110.9M memory peak. Sep 11 00:16:17.369460 systemd-networkd[1484]: docker0: Link UP Sep 11 00:16:17.376848 dockerd[1814]: time="2025-09-11T00:16:17.376773262Z" level=info msg="Loading containers: done." Sep 11 00:16:17.406359 dockerd[1814]: time="2025-09-11T00:16:17.406269005Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:16:17.406697 dockerd[1814]: time="2025-09-11T00:16:17.406415835Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 11 00:16:17.406697 dockerd[1814]: time="2025-09-11T00:16:17.406586576Z" level=info msg="Initializing buildkit" Sep 11 00:16:17.449751 dockerd[1814]: time="2025-09-11T00:16:17.449693739Z" level=info msg="Completed buildkit initialization" Sep 11 00:16:17.459283 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:16:17.459726 dockerd[1814]: time="2025-09-11T00:16:17.458355388Z" level=info msg="Daemon has completed initialization" Sep 11 00:16:17.459726 dockerd[1814]: time="2025-09-11T00:16:17.459435954Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:16:18.960352 containerd[1561]: time="2025-09-11T00:16:18.960258334Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 11 00:16:19.651432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3368483863.mount: Deactivated successfully. 
Sep 11 00:16:20.855485 containerd[1561]: time="2025-09-11T00:16:20.855417144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:20.856211 containerd[1561]: time="2025-09-11T00:16:20.856139208Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 11 00:16:20.857533 containerd[1561]: time="2025-09-11T00:16:20.857485974Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:20.860154 containerd[1561]: time="2025-09-11T00:16:20.860123574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:20.861209 containerd[1561]: time="2025-09-11T00:16:20.861150301Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.900811905s" Sep 11 00:16:20.861275 containerd[1561]: time="2025-09-11T00:16:20.861226402Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 11 00:16:20.862053 containerd[1561]: time="2025-09-11T00:16:20.862009715Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 11 00:16:22.241683 containerd[1561]: time="2025-09-11T00:16:22.241592678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:22.242469 containerd[1561]: time="2025-09-11T00:16:22.242405457Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 11 00:16:22.243721 containerd[1561]: time="2025-09-11T00:16:22.243687259Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:22.247580 containerd[1561]: time="2025-09-11T00:16:22.247500623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:22.248724 containerd[1561]: time="2025-09-11T00:16:22.248668160Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.386624116s" Sep 11 00:16:22.248724 containerd[1561]: time="2025-09-11T00:16:22.248717576Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 11 
00:16:22.249536 containerd[1561]: time="2025-09-11T00:16:22.249316964Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 11 00:16:25.129204 containerd[1561]: time="2025-09-11T00:16:25.129084915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:25.130142 containerd[1561]: time="2025-09-11T00:16:25.129947057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 11 00:16:25.131394 containerd[1561]: time="2025-09-11T00:16:25.131353134Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:25.134395 containerd[1561]: time="2025-09-11T00:16:25.134351275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:25.135709 containerd[1561]: time="2025-09-11T00:16:25.135661124Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.886309495s" Sep 11 00:16:25.135789 containerd[1561]: time="2025-09-11T00:16:25.135717116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 11 00:16:25.136749 containerd[1561]: time="2025-09-11T00:16:25.136717840Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 11 00:16:26.487385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777259721.mount: Deactivated successfully. Sep 11 00:16:27.285768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:16:27.288609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 11 00:16:27.303649 containerd[1561]: time="2025-09-11T00:16:27.303550647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:27.304930 containerd[1561]: time="2025-09-11T00:16:27.304878550Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 11 00:16:27.306983 containerd[1561]: time="2025-09-11T00:16:27.306893500Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:27.310256 containerd[1561]: time="2025-09-11T00:16:27.310182480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:27.310771 containerd[1561]: time="2025-09-11T00:16:27.310730192Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.173946191s" Sep 11 00:16:27.310928 containerd[1561]: time="2025-09-11T00:16:27.310773494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 11 00:16:27.311933 containerd[1561]: time="2025-09-11T00:16:27.311894165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 00:16:27.571347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:27.586939 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:16:27.667278 kubelet[2133]: E0911 00:16:27.667192 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:16:27.672091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:16:27.672326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:16:27.672783 systemd[1]: kubelet.service: Consumed 307ms CPU time, 110.6M memory peak. Sep 11 00:16:28.513144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721238538.mount: Deactivated successfully. 
Sep 11 00:16:31.375408 containerd[1561]: time="2025-09-11T00:16:31.375301112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:31.376370 containerd[1561]: time="2025-09-11T00:16:31.376293658Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 11 00:16:31.377775 containerd[1561]: time="2025-09-11T00:16:31.377719115Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:31.381043 containerd[1561]: time="2025-09-11T00:16:31.380981816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:31.382440 containerd[1561]: time="2025-09-11T00:16:31.382360632Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.070428028s" Sep 11 00:16:31.382440 containerd[1561]: time="2025-09-11T00:16:31.382421870Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 11 00:16:31.383136 containerd[1561]: time="2025-09-11T00:16:31.383083166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:16:32.243652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161788028.mount: Deactivated successfully. 
Sep 11 00:16:32.320764 containerd[1561]: time="2025-09-11T00:16:32.320663591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:16:32.321924 containerd[1561]: time="2025-09-11T00:16:32.321858694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:16:32.323753 containerd[1561]: time="2025-09-11T00:16:32.323653473Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:16:32.326393 containerd[1561]: time="2025-09-11T00:16:32.326269970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:16:32.327067 containerd[1561]: time="2025-09-11T00:16:32.327012842Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 943.885946ms" Sep 11 00:16:32.327067 containerd[1561]: time="2025-09-11T00:16:32.327058506Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:16:32.327805 containerd[1561]: time="2025-09-11T00:16:32.327764316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 11 00:16:32.935943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214103305.mount: Deactivated successfully. 
Sep 11 00:16:35.667208 containerd[1561]: time="2025-09-11T00:16:35.667097355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:35.668386 containerd[1561]: time="2025-09-11T00:16:35.668342790Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 11 00:16:35.679894 containerd[1561]: time="2025-09-11T00:16:35.679792713Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:35.693395 containerd[1561]: time="2025-09-11T00:16:35.693305096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:35.695171 containerd[1561]: time="2025-09-11T00:16:35.695080656Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.367272563s" Sep 11 00:16:35.695171 containerd[1561]: time="2025-09-11T00:16:35.695149211Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 11 00:16:37.863722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 11 00:16:37.865963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:38.100009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:38.116080 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:16:38.182068 kubelet[2283]: E0911 00:16:38.181965 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:16:38.187484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:16:38.187764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:16:38.188176 systemd[1]: kubelet.service: Consumed 262ms CPU time, 109.8M memory peak. Sep 11 00:16:39.068556 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:39.068764 systemd[1]: kubelet.service: Consumed 262ms CPU time, 109.8M memory peak. Sep 11 00:16:39.071184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:39.096574 systemd[1]: Reload requested from client PID 2298 ('systemctl') (unit session-9.scope)... Sep 11 00:16:39.096599 systemd[1]: Reloading... Sep 11 00:16:39.192954 zram_generator::config[2338]: No configuration found. Sep 11 00:16:40.337906 systemd[1]: Reloading finished in 1240 ms. Sep 11 00:16:40.415637 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:16:40.415793 systemd[1]: kubelet.service: Failed with result 'signal'. 
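Each "Pulled image ... in N" entry pairs an image size in bytes with a wall-clock duration, so effective pull throughput can be read straight off the log; for the etcd pull recorded a few entries above it works out to roughly 16 MiB/s. The small Go calculation below is not part of the log, and the figures are copied from that entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // From the etcd pull entry above: reported image size and duration.
        const sizeBytes = 56909194
        dur, _ := time.ParseDuration("3.367272563s")

        mibPerSec := float64(sizeBytes) / dur.Seconds() / (1 << 20)
        fmt.Printf("effective pull throughput: %.1f MiB/s\n", mibPerSec) // ~16.1 MiB/s
    }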
Sep 11 00:16:40.416240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:40.416304 systemd[1]: kubelet.service: Consumed 187ms CPU time, 98.3M memory peak. Sep 11 00:16:40.418553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:40.627155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:40.646982 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:16:40.685609 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:16:40.685609 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 00:16:40.685609 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:16:40.687297 kubelet[2389]: I0911 00:16:40.685658 2389 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:16:42.244285 kubelet[2389]: I0911 00:16:42.244201 2389 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:16:42.244285 kubelet[2389]: I0911 00:16:42.244245 2389 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:16:42.244886 kubelet[2389]: I0911 00:16:42.244544 2389 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:16:42.272892 kubelet[2389]: E0911 00:16:42.272820 2389 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:16:42.273934 kubelet[2389]: I0911 00:16:42.273888 2389 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:16:42.287114 kubelet[2389]: I0911 00:16:42.287075 2389 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:16:42.294580 kubelet[2389]: I0911 00:16:42.294497 2389 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:16:42.294736 kubelet[2389]: I0911 00:16:42.294647 2389 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:16:42.294865 kubelet[2389]: I0911 00:16:42.294808 2389 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:16:42.295051 kubelet[2389]: I0911 00:16:42.294845 2389 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:16:42.295051 kubelet[2389]: I0911 00:16:42.295054 2389 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:16:42.295236 kubelet[2389]: I0911 00:16:42.295065 2389 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:16:42.295236 kubelet[2389]: I0911 00:16:42.295204 2389 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:16:42.298736 kubelet[2389]: I0911 00:16:42.298642 2389 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:16:42.298736 kubelet[2389]: I0911 00:16:42.298711 2389 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:16:42.298961 kubelet[2389]: I0911 00:16:42.298775 2389 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:16:42.298961 kubelet[2389]: I0911 00:16:42.298818 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:16:42.301058 kubelet[2389]: W0911 00:16:42.300816 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 11 00:16:42.301058 kubelet[2389]: E0911 00:16:42.300948 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:16:42.301333 kubelet[2389]: W0911 00:16:42.301202 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 11 00:16:42.301333 kubelet[2389]: E0911 00:16:42.301259 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:16:42.302278 kubelet[2389]: I0911 00:16:42.302242 2389 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 11 00:16:42.302872 kubelet[2389]: I0911 00:16:42.302835 2389 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:16:42.302971 kubelet[2389]: W0911 00:16:42.302952 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:16:42.305902 kubelet[2389]: I0911 00:16:42.305697 2389 server.go:1274] "Started kubelet" Sep 11 00:16:42.306278 kubelet[2389]: I0911 00:16:42.306217 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:16:42.307191 kubelet[2389]: I0911 00:16:42.306570 2389 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:16:42.307191 kubelet[2389]: I0911 00:16:42.306802 2389 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:16:42.308624 kubelet[2389]: I0911 00:16:42.308427 2389 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:16:42.308688 kubelet[2389]: I0911 00:16:42.308662 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:16:42.310294 kubelet[2389]: I0911 00:16:42.310243 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:16:42.313557 kubelet[2389]: E0911 00:16:42.313499 2389 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:16:42.314197 kubelet[2389]: I0911 00:16:42.314177 2389 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:16:42.314676 kubelet[2389]: I0911 00:16:42.314656 2389 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:16:42.315571 kubelet[2389]: I0911 00:16:42.315184 2389 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:16:42.317130 kubelet[2389]: W0911 00:16:42.315960 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 11 00:16:42.317130 kubelet[2389]: E0911 00:16:42.316031 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:16:42.317130 kubelet[2389]: E0911 00:16:42.313455 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864123bb97c906a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:16:42.30565489 +0000 UTC m=+1.654295997,LastTimestamp:2025-09-11 00:16:42.30565489 +0000 UTC m=+1.654295997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:16:42.317130 kubelet[2389]: E0911 00:16:42.316540 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="200ms" Sep 11 00:16:42.317130 kubelet[2389]: E0911 00:16:42.316607 2389 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:16:42.317130 kubelet[2389]: I0911 00:16:42.316944 2389 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:16:42.317474 kubelet[2389]: I0911 00:16:42.317054 2389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:16:42.322991 kubelet[2389]: I0911 00:16:42.322960 2389 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:16:42.337315 kubelet[2389]: I0911 00:16:42.337232 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:16:42.339037 kubelet[2389]: I0911 00:16:42.339005 2389 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:16:42.339037 kubelet[2389]: I0911 00:16:42.339029 2389 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:16:42.339170 kubelet[2389]: I0911 00:16:42.339055 2389 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:16:42.340587 kubelet[2389]: I0911 00:16:42.340548 2389 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:16:42.340659 kubelet[2389]: I0911 00:16:42.340606 2389 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:16:42.340659 kubelet[2389]: I0911 00:16:42.340642 2389 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:16:42.340760 kubelet[2389]: E0911 00:16:42.340710 2389 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:16:42.341385 kubelet[2389]: W0911 00:16:42.341354 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 11 00:16:42.341482 kubelet[2389]: E0911 00:16:42.341393 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:16:42.342713 kubelet[2389]: I0911 00:16:42.342679 2389 policy_none.go:49] "None policy: Start" Sep 11 00:16:42.344047 kubelet[2389]: I0911 00:16:42.343708 2389 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:16:42.344047 kubelet[2389]: I0911 00:16:42.343737 2389 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:16:42.352682 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:16:42.371711 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:16:42.375688 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:16:42.395418 kubelet[2389]: I0911 00:16:42.395293 2389 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:16:42.395729 kubelet[2389]: I0911 00:16:42.395689 2389 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:16:42.395729 kubelet[2389]: I0911 00:16:42.395713 2389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:16:42.396225 kubelet[2389]: I0911 00:16:42.396194 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:16:42.397544 kubelet[2389]: E0911 00:16:42.397486 2389 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:16:42.452228 systemd[1]: Created slice kubepods-burstable-pod239dba76177eef97232b74c1ba8d372f.slice - libcontainer container kubepods-burstable-pod239dba76177eef97232b74c1ba8d372f.slice. Sep 11 00:16:42.487201 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. 
Sep 11 00:16:42.497483 kubelet[2389]: I0911 00:16:42.497361 2389 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:16:42.497989 kubelet[2389]: E0911 00:16:42.497908 2389 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 11 00:16:42.501655 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 11 00:16:42.515601 kubelet[2389]: I0911 00:16:42.515490 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/239dba76177eef97232b74c1ba8d372f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"239dba76177eef97232b74c1ba8d372f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:16:42.515601 kubelet[2389]: I0911 00:16:42.515554 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:42.515601 kubelet[2389]: I0911 00:16:42.515571 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:42.515601 kubelet[2389]: I0911 00:16:42.515588 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:16:42.515601 kubelet[2389]: I0911 00:16:42.515604 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/239dba76177eef97232b74c1ba8d372f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"239dba76177eef97232b74c1ba8d372f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:16:42.515920 kubelet[2389]: I0911 00:16:42.515618 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/239dba76177eef97232b74c1ba8d372f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"239dba76177eef97232b74c1ba8d372f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:16:42.515920 kubelet[2389]: I0911 00:16:42.515632 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:42.515920 kubelet[2389]: I0911 00:16:42.515646 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:42.515920 kubelet[2389]: I0911 00:16:42.515663 2389 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:42.517819 kubelet[2389]: E0911 00:16:42.517759 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="400ms" Sep 11 00:16:42.700667 kubelet[2389]: I0911 00:16:42.700620 2389 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:16:42.701203 kubelet[2389]: E0911 00:16:42.701149 2389 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 11 00:16:42.784647 kubelet[2389]: E0911 00:16:42.784381 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:42.785377 containerd[1561]: time="2025-09-11T00:16:42.785299813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:239dba76177eef97232b74c1ba8d372f,Namespace:kube-system,Attempt:0,}" Sep 11 00:16:42.790629 kubelet[2389]: E0911 00:16:42.790586 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:42.793553 containerd[1561]: time="2025-09-11T00:16:42.791576975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 11 00:16:42.805223 kubelet[2389]: E0911 00:16:42.805180 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:42.805726 containerd[1561]: time="2025-09-11T00:16:42.805683201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 11 00:16:42.850545 containerd[1561]: time="2025-09-11T00:16:42.850402345Z" level=info msg="connecting to shim 70aab2e69debe7deaa30168ba19b4eeed25ca4804c3a728041da03411091e6ab" address="unix:///run/containerd/s/7c5d0b0dd22ceef410caf50f206488b017a8d7a35b5f3466be9ba9e49742ca88" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:16:42.851728 containerd[1561]: time="2025-09-11T00:16:42.851672913Z" level=info msg="connecting to shim 46d69a93a6ae46d205f0551408241e1764ee0916fdbb447c85458c2216e16040" address="unix:///run/containerd/s/12bcaaa8069a48d296bcbdc96a03e1ead179c49410874c0687d31f14ede8ea25" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:16:42.890924 systemd[1]: Started cri-containerd-70aab2e69debe7deaa30168ba19b4eeed25ca4804c3a728041da03411091e6ab.scope - 
libcontainer container 70aab2e69debe7deaa30168ba19b4eeed25ca4804c3a728041da03411091e6ab. Sep 11 00:16:42.892035 containerd[1561]: time="2025-09-11T00:16:42.891995727Z" level=info msg="connecting to shim 67a411bdb9cac83836778ca4cfa43b9046bd0bf98a9022d7733923da55c0e5b8" address="unix:///run/containerd/s/1cf1610a7c0a56f5d42e6398f477634191d69ad18d32ce1820f84214e0920ba1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:16:42.896588 systemd[1]: Started cri-containerd-46d69a93a6ae46d205f0551408241e1764ee0916fdbb447c85458c2216e16040.scope - libcontainer container 46d69a93a6ae46d205f0551408241e1764ee0916fdbb447c85458c2216e16040. Sep 11 00:16:42.919078 kubelet[2389]: E0911 00:16:42.918972 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="800ms" Sep 11 00:16:42.934687 systemd[1]: Started cri-containerd-67a411bdb9cac83836778ca4cfa43b9046bd0bf98a9022d7733923da55c0e5b8.scope - libcontainer container 67a411bdb9cac83836778ca4cfa43b9046bd0bf98a9022d7733923da55c0e5b8. Sep 11 00:16:42.980338 containerd[1561]: time="2025-09-11T00:16:42.980285339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:239dba76177eef97232b74c1ba8d372f,Namespace:kube-system,Attempt:0,} returns sandbox id \"70aab2e69debe7deaa30168ba19b4eeed25ca4804c3a728041da03411091e6ab\"" Sep 11 00:16:42.982040 containerd[1561]: time="2025-09-11T00:16:42.981955760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"46d69a93a6ae46d205f0551408241e1764ee0916fdbb447c85458c2216e16040\"" Sep 11 00:16:42.982294 kubelet[2389]: E0911 00:16:42.982157 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:42.982815 kubelet[2389]: E0911 00:16:42.982800 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:42.985135 containerd[1561]: time="2025-09-11T00:16:42.985021306Z" level=info msg="CreateContainer within sandbox \"46d69a93a6ae46d205f0551408241e1764ee0916fdbb447c85458c2216e16040\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:16:42.985590 containerd[1561]: time="2025-09-11T00:16:42.985405855Z" level=info msg="CreateContainer within sandbox \"70aab2e69debe7deaa30168ba19b4eeed25ca4804c3a728041da03411091e6ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:16:42.996842 containerd[1561]: time="2025-09-11T00:16:42.996788590Z" level=info msg="Container f46f38a4e5ca8cf07b4045284b4052335b308d68ae78dbd171a011ffa222cd94: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:16:43.007206 containerd[1561]: time="2025-09-11T00:16:43.007155329Z" level=info msg="Container e5936858ee1c0ba4afbcd3a9897c7416de3227d694227755c18f6fb3c23b463b: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:16:43.014056 containerd[1561]: time="2025-09-11T00:16:43.014015604Z" level=info msg="CreateContainer within sandbox \"46d69a93a6ae46d205f0551408241e1764ee0916fdbb447c85458c2216e16040\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"f46f38a4e5ca8cf07b4045284b4052335b308d68ae78dbd171a011ffa222cd94\"" Sep 11 00:16:43.015310 containerd[1561]: time="2025-09-11T00:16:43.015270994Z" level=info msg="StartContainer for \"f46f38a4e5ca8cf07b4045284b4052335b308d68ae78dbd171a011ffa222cd94\"" Sep 11 00:16:43.016679 containerd[1561]: time="2025-09-11T00:16:43.016649367Z" level=info msg="connecting to shim f46f38a4e5ca8cf07b4045284b4052335b308d68ae78dbd171a011ffa222cd94" address="unix:///run/containerd/s/12bcaaa8069a48d296bcbdc96a03e1ead179c49410874c0687d31f14ede8ea25" protocol=ttrpc version=3 Sep 11 00:16:43.019639 containerd[1561]: time="2025-09-11T00:16:43.019601065Z" level=info msg="CreateContainer within sandbox \"70aab2e69debe7deaa30168ba19b4eeed25ca4804c3a728041da03411091e6ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e5936858ee1c0ba4afbcd3a9897c7416de3227d694227755c18f6fb3c23b463b\"" Sep 11 00:16:43.020489 containerd[1561]: time="2025-09-11T00:16:43.020461290Z" level=info msg="StartContainer for \"e5936858ee1c0ba4afbcd3a9897c7416de3227d694227755c18f6fb3c23b463b\"" Sep 11 00:16:43.021818 containerd[1561]: time="2025-09-11T00:16:43.021789852Z" level=info msg="connecting to shim e5936858ee1c0ba4afbcd3a9897c7416de3227d694227755c18f6fb3c23b463b" address="unix:///run/containerd/s/7c5d0b0dd22ceef410caf50f206488b017a8d7a35b5f3466be9ba9e49742ca88" protocol=ttrpc version=3 Sep 11 00:16:43.041705 systemd[1]: Started cri-containerd-e5936858ee1c0ba4afbcd3a9897c7416de3227d694227755c18f6fb3c23b463b.scope - libcontainer container e5936858ee1c0ba4afbcd3a9897c7416de3227d694227755c18f6fb3c23b463b. Sep 11 00:16:43.045279 systemd[1]: Started cri-containerd-f46f38a4e5ca8cf07b4045284b4052335b308d68ae78dbd171a011ffa222cd94.scope - libcontainer container f46f38a4e5ca8cf07b4045284b4052335b308d68ae78dbd171a011ffa222cd94. 
Sep 11 00:16:43.057799 containerd[1561]: time="2025-09-11T00:16:43.057757345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"67a411bdb9cac83836778ca4cfa43b9046bd0bf98a9022d7733923da55c0e5b8\"" Sep 11 00:16:43.059109 kubelet[2389]: E0911 00:16:43.059037 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:43.062293 containerd[1561]: time="2025-09-11T00:16:43.061470129Z" level=info msg="CreateContainer within sandbox \"67a411bdb9cac83836778ca4cfa43b9046bd0bf98a9022d7733923da55c0e5b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:16:43.074357 containerd[1561]: time="2025-09-11T00:16:43.074315038Z" level=info msg="Container ca15aacb7726739d6db8811a7b83327342469f3dc1b6016395e7b1a167a2f6ce: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:16:43.081817 containerd[1561]: time="2025-09-11T00:16:43.081792389Z" level=info msg="CreateContainer within sandbox \"67a411bdb9cac83836778ca4cfa43b9046bd0bf98a9022d7733923da55c0e5b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca15aacb7726739d6db8811a7b83327342469f3dc1b6016395e7b1a167a2f6ce\"" Sep 11 00:16:43.082678 containerd[1561]: time="2025-09-11T00:16:43.082659359Z" level=info msg="StartContainer for \"ca15aacb7726739d6db8811a7b83327342469f3dc1b6016395e7b1a167a2f6ce\"" Sep 11 00:16:43.083773 containerd[1561]: time="2025-09-11T00:16:43.083749935Z" level=info msg="connecting to shim ca15aacb7726739d6db8811a7b83327342469f3dc1b6016395e7b1a167a2f6ce" address="unix:///run/containerd/s/1cf1610a7c0a56f5d42e6398f477634191d69ad18d32ce1820f84214e0920ba1" protocol=ttrpc version=3 Sep 11 00:16:43.103533 kubelet[2389]: I0911 00:16:43.103476 2389 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:16:43.104759 kubelet[2389]: E0911 00:16:43.104730 2389 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Sep 11 00:16:43.109975 systemd[1]: Started cri-containerd-ca15aacb7726739d6db8811a7b83327342469f3dc1b6016395e7b1a167a2f6ce.scope - libcontainer container ca15aacb7726739d6db8811a7b83327342469f3dc1b6016395e7b1a167a2f6ce. 
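The recurring "dial tcp 10.0.0.58:6443: connect: connection refused" errors only mean that nothing is listening on the API server port yet while its static pod is still being started; once kube-apiserver is up, node registration and the informers recover on their own. A quick probe of the same kind (the address is the one in the log; adjust for your environment):

// Minimal TCP reachability check against the logged API server endpoint.
package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    conn, err := net.DialTimeout("tcp", "10.0.0.58:6443", 2*time.Second)
    if err != nil {
        fmt.Println("API server not reachable yet:", err) // e.g. connect: connection refused
        return
    }
    conn.Close()
    fmt.Println("API server port is accepting connections")
}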
Sep 11 00:16:43.120852 containerd[1561]: time="2025-09-11T00:16:43.120695403Z" level=info msg="StartContainer for \"e5936858ee1c0ba4afbcd3a9897c7416de3227d694227755c18f6fb3c23b463b\" returns successfully" Sep 11 00:16:43.129140 containerd[1561]: time="2025-09-11T00:16:43.129071635Z" level=info msg="StartContainer for \"f46f38a4e5ca8cf07b4045284b4052335b308d68ae78dbd171a011ffa222cd94\" returns successfully" Sep 11 00:16:43.135538 kubelet[2389]: W0911 00:16:43.134980 2389 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Sep 11 00:16:43.135538 kubelet[2389]: E0911 00:16:43.135067 2389 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:16:43.189883 containerd[1561]: time="2025-09-11T00:16:43.189827661Z" level=info msg="StartContainer for \"ca15aacb7726739d6db8811a7b83327342469f3dc1b6016395e7b1a167a2f6ce\" returns successfully" Sep 11 00:16:43.352191 kubelet[2389]: E0911 00:16:43.351423 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:43.352878 kubelet[2389]: E0911 00:16:43.352851 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:43.356527 kubelet[2389]: E0911 00:16:43.355113 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:43.908912 kubelet[2389]: I0911 00:16:43.908843 2389 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:16:44.358201 kubelet[2389]: E0911 00:16:44.358076 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:44.401544 kubelet[2389]: E0911 00:16:44.401032 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:16:44.895390 kubelet[2389]: I0911 00:16:44.895321 2389 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 00:16:44.895390 kubelet[2389]: E0911 00:16:44.895365 2389 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 00:16:45.300924 kubelet[2389]: I0911 00:16:45.300740 2389 apiserver.go:52] "Watching apiserver" Sep 11 00:16:45.315435 kubelet[2389]: I0911 00:16:45.315349 2389 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:16:46.022354 kubelet[2389]: E0911 00:16:46.022295 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:46.360249 kubelet[2389]: E0911 00:16:46.360112 2389 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:47.492502 update_engine[1551]: I20250911 00:16:47.492381 1551 update_attempter.cc:509] Updating boot flags... Sep 11 00:16:48.406603 kubelet[2389]: E0911 00:16:48.406541 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:48.592867 systemd[1]: Reload requested from client PID 2682 ('systemctl') (unit session-9.scope)... Sep 11 00:16:48.592884 systemd[1]: Reloading... Sep 11 00:16:48.678542 zram_generator::config[2728]: No configuration found. Sep 11 00:16:48.947487 systemd[1]: Reloading finished in 354 ms. Sep 11 00:16:48.971612 kubelet[2389]: I0911 00:16:48.971564 2389 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:16:48.971896 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:48.987145 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:16:48.987599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:48.987663 systemd[1]: kubelet.service: Consumed 895ms CPU time, 130M memory peak. Sep 11 00:16:48.989792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:49.262410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:49.275292 (kubelet)[2770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:16:50.069421 kubelet[2770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:16:50.069421 kubelet[2770]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 00:16:50.069421 kubelet[2770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:16:50.070059 kubelet[2770]: I0911 00:16:50.069453 2770 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:16:50.076272 kubelet[2770]: I0911 00:16:50.076220 2770 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:16:50.076272 kubelet[2770]: I0911 00:16:50.076250 2770 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:16:50.076631 kubelet[2770]: I0911 00:16:50.076607 2770 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:16:50.078399 kubelet[2770]: I0911 00:16:50.078347 2770 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 11 00:16:50.081205 kubelet[2770]: I0911 00:16:50.080717 2770 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:16:50.086970 kubelet[2770]: I0911 00:16:50.086944 2770 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:16:50.091997 kubelet[2770]: I0911 00:16:50.091968 2770 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 00:16:50.092151 kubelet[2770]: I0911 00:16:50.092133 2770 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:16:50.092338 kubelet[2770]: I0911 00:16:50.092300 2770 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:16:50.092580 kubelet[2770]: I0911 00:16:50.092341 2770 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:16:50.092695 kubelet[2770]: I0911 00:16:50.092606 2770 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:16:50.092695 kubelet[2770]: I0911 00:16:50.092622 2770 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:16:50.092695 kubelet[2770]: I0911 00:16:50.092659 2770 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:16:50.092823 kubelet[2770]: I0911 00:16:50.092808 2770 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:16:50.092865 kubelet[2770]: I0911 00:16:50.092831 2770 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:16:50.092894 kubelet[2770]: I0911 00:16:50.092865 2770 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:16:50.092894 kubelet[2770]: I0911 00:16:50.092879 2770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:16:50.093716 sudo[2786]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 11 
00:16:50.094177 sudo[2786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 11 00:16:50.095722 kubelet[2770]: I0911 00:16:50.094101 2770 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 11 00:16:50.095722 kubelet[2770]: I0911 00:16:50.094957 2770 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:16:50.095772 kubelet[2770]: I0911 00:16:50.095724 2770 server.go:1274] "Started kubelet" Sep 11 00:16:50.095995 kubelet[2770]: I0911 00:16:50.095955 2770 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:16:50.096881 kubelet[2770]: I0911 00:16:50.096200 2770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:16:50.096881 kubelet[2770]: I0911 00:16:50.096793 2770 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:16:50.098534 kubelet[2770]: I0911 00:16:50.097261 2770 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:16:50.100091 kubelet[2770]: I0911 00:16:50.099778 2770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:16:50.106684 kubelet[2770]: I0911 00:16:50.106644 2770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:16:50.114579 kubelet[2770]: I0911 00:16:50.107096 2770 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:16:50.114718 kubelet[2770]: I0911 00:16:50.107140 2770 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:16:50.115654 kubelet[2770]: I0911 00:16:50.115629 2770 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:16:50.115654 kubelet[2770]: E0911 00:16:50.107337 2770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:16:50.116399 kubelet[2770]: I0911 00:16:50.116373 2770 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:16:50.116583 kubelet[2770]: I0911 00:16:50.116554 2770 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:16:50.116922 kubelet[2770]: E0911 00:16:50.116900 2770 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:16:50.120733 kubelet[2770]: I0911 00:16:50.120696 2770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:16:50.123533 kubelet[2770]: I0911 00:16:50.121658 2770 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:16:50.126738 kubelet[2770]: I0911 00:16:50.126678 2770 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:16:50.128603 kubelet[2770]: I0911 00:16:50.128567 2770 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:16:50.128603 kubelet[2770]: I0911 00:16:50.128614 2770 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:16:50.128747 kubelet[2770]: E0911 00:16:50.128681 2770 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:16:50.171143 kubelet[2770]: I0911 00:16:50.171114 2770 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:16:50.171294 kubelet[2770]: I0911 00:16:50.171283 2770 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:16:50.171397 kubelet[2770]: I0911 00:16:50.171387 2770 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:16:50.171636 kubelet[2770]: I0911 00:16:50.171618 2770 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:16:50.171711 kubelet[2770]: I0911 00:16:50.171685 2770 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:16:50.171768 kubelet[2770]: I0911 00:16:50.171760 2770 policy_none.go:49] "None policy: Start" Sep 11 00:16:50.172625 kubelet[2770]: I0911 00:16:50.172607 2770 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:16:50.172668 kubelet[2770]: I0911 00:16:50.172630 2770 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:16:50.172750 kubelet[2770]: I0911 00:16:50.172737 2770 state_mem.go:75] "Updated machine memory state" Sep 11 00:16:50.177651 kubelet[2770]: I0911 00:16:50.177628 2770 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:16:50.178482 kubelet[2770]: I0911 00:16:50.178072 2770 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:16:50.178482 kubelet[2770]: I0911 00:16:50.178093 2770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:16:50.178482 kubelet[2770]: I0911 00:16:50.178390 2770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:16:50.238833 kubelet[2770]: E0911 00:16:50.238787 2770 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:50.239102 kubelet[2770]: E0911 00:16:50.239063 2770 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 00:16:50.284976 kubelet[2770]: I0911 00:16:50.284934 2770 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:16:50.294108 kubelet[2770]: I0911 00:16:50.294051 2770 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 11 00:16:50.294261 kubelet[2770]: I0911 00:16:50.294157 2770 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 00:16:50.317414 kubelet[2770]: I0911 00:16:50.317294 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:50.317414 kubelet[2770]: I0911 00:16:50.317340 2770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:16:50.317414 kubelet[2770]: I0911 00:16:50.317362 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/239dba76177eef97232b74c1ba8d372f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"239dba76177eef97232b74c1ba8d372f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:16:50.317414 kubelet[2770]: I0911 00:16:50.317379 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/239dba76177eef97232b74c1ba8d372f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"239dba76177eef97232b74c1ba8d372f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:16:50.317414 kubelet[2770]: I0911 00:16:50.317404 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/239dba76177eef97232b74c1ba8d372f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"239dba76177eef97232b74c1ba8d372f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:16:50.317803 kubelet[2770]: I0911 00:16:50.317426 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:50.317803 kubelet[2770]: I0911 00:16:50.317441 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:50.317803 kubelet[2770]: I0911 00:16:50.317469 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:50.317803 kubelet[2770]: I0911 00:16:50.317487 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:16:50.536733 kubelet[2770]: E0911 00:16:50.536464 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:50.539775 kubelet[2770]: E0911 00:16:50.539731 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:50.539890 kubelet[2770]: E0911 00:16:50.539867 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:50.587440 sudo[2786]: pam_unix(sudo:session): session closed for user root Sep 11 00:16:51.093777 kubelet[2770]: I0911 00:16:51.093711 2770 apiserver.go:52] "Watching apiserver" Sep 11 00:16:51.115823 kubelet[2770]: I0911 00:16:51.115753 2770 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:16:51.152688 kubelet[2770]: E0911 00:16:51.152357 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:51.152688 kubelet[2770]: E0911 00:16:51.152474 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:51.157934 kubelet[2770]: E0911 00:16:51.157630 2770 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:16:51.157934 kubelet[2770]: E0911 00:16:51.157844 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:51.181189 kubelet[2770]: I0911 00:16:51.181058 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.18102847 podStartE2EDuration="1.18102847s" podCreationTimestamp="2025-09-11 00:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:16:51.180871 +0000 UTC m=+1.173728472" watchObservedRunningTime="2025-09-11 00:16:51.18102847 +0000 UTC m=+1.173885942" Sep 11 00:16:51.181487 kubelet[2770]: I0911 00:16:51.181263 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.181252161 podStartE2EDuration="6.181252161s" podCreationTimestamp="2025-09-11 00:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:16:51.174261509 +0000 UTC m=+1.167118981" watchObservedRunningTime="2025-09-11 00:16:51.181252161 +0000 UTC m=+1.174109633" Sep 11 00:16:51.928066 sudo[1794]: pam_unix(sudo:session): session closed for user root Sep 11 00:16:51.929898 sshd[1793]: Connection closed by 10.0.0.1 port 37834 Sep 11 00:16:51.930746 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:51.936303 systemd[1]: sshd@8-10.0.0.58:22-10.0.0.1:37834.service: Deactivated successfully. Sep 11 00:16:51.938918 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:16:51.939176 systemd[1]: session-9.scope: Consumed 6.314s CPU time, 266.7M memory peak. Sep 11 00:16:51.940724 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:16:51.942225 systemd-logind[1548]: Removed session 9. 
Sep 11 00:16:52.153470 kubelet[2770]: E0911 00:16:52.153420 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:53.412373 kubelet[2770]: I0911 00:16:53.412316 2770 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:16:53.412907 containerd[1561]: time="2025-09-11T00:16:53.412720840Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:16:53.413262 kubelet[2770]: I0911 00:16:53.412900 2770 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:16:53.696738 kubelet[2770]: E0911 00:16:53.696574 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:53.707374 kubelet[2770]: I0911 00:16:53.707292 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.707244188 podStartE2EDuration="5.707244188s" podCreationTimestamp="2025-09-11 00:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:16:51.186759695 +0000 UTC m=+1.179617187" watchObservedRunningTime="2025-09-11 00:16:53.707244188 +0000 UTC m=+3.700101660" Sep 11 00:16:54.042437 systemd[1]: Created slice kubepods-besteffort-pod731c3a34_f3d3_474e_b71e_84767f581212.slice - libcontainer container kubepods-besteffort-pod731c3a34_f3d3_474e_b71e_84767f581212.slice. Sep 11 00:16:54.055782 systemd[1]: Created slice kubepods-burstable-pod42e150ed_be6b_4713_ac8f_63896a7aff87.slice - libcontainer container kubepods-burstable-pod42e150ed_be6b_4713_ac8f_63896a7aff87.slice. 
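The "Updating Pod CIDR" entry records the node being handed 192.168.0.0/24 for pod addressing, which containerd is then told about through the CRI runtime config. A small check of what that range provides:

// CIDR value taken from the log; everything else is a generic calculation.
package main

import (
    "fmt"
    "net"
)

func main() {
    _, ipnet, err := net.ParseCIDR("192.168.0.0/24")
    if err != nil {
        panic(err)
    }
    ones, bits := ipnet.Mask.Size()
    fmt.Printf("network %s: %d addresses available for pods on this node\n",
        ipnet, 1<<(bits-ones)) // 256 in a /24, a few conventionally reserved
}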
Sep 11 00:16:54.056447 kubelet[2770]: I0911 00:16:54.056416 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/731c3a34-f3d3-474e-b71e-84767f581212-lib-modules\") pod \"kube-proxy-lmlhs\" (UID: \"731c3a34-f3d3-474e-b71e-84767f581212\") " pod="kube-system/kube-proxy-lmlhs" Sep 11 00:16:54.056584 kubelet[2770]: I0911 00:16:54.056452 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/731c3a34-f3d3-474e-b71e-84767f581212-kube-proxy\") pod \"kube-proxy-lmlhs\" (UID: \"731c3a34-f3d3-474e-b71e-84767f581212\") " pod="kube-system/kube-proxy-lmlhs" Sep 11 00:16:54.056584 kubelet[2770]: I0911 00:16:54.056472 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk25b\" (UniqueName: \"kubernetes.io/projected/731c3a34-f3d3-474e-b71e-84767f581212-kube-api-access-nk25b\") pod \"kube-proxy-lmlhs\" (UID: \"731c3a34-f3d3-474e-b71e-84767f581212\") " pod="kube-system/kube-proxy-lmlhs" Sep 11 00:16:54.056584 kubelet[2770]: I0911 00:16:54.056491 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/731c3a34-f3d3-474e-b71e-84767f581212-xtables-lock\") pod \"kube-proxy-lmlhs\" (UID: \"731c3a34-f3d3-474e-b71e-84767f581212\") " pod="kube-system/kube-proxy-lmlhs" Sep 11 00:16:54.156902 kubelet[2770]: I0911 00:16:54.156816 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-host-proc-sys-kernel\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.156902 kubelet[2770]: I0911 00:16:54.156872 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-lib-modules\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157305 kubelet[2770]: I0911 00:16:54.156925 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-etc-cni-netd\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157305 kubelet[2770]: I0911 00:16:54.156941 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-xtables-lock\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157305 kubelet[2770]: I0911 00:16:54.156960 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-config-path\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157305 kubelet[2770]: I0911 00:16:54.156984 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42e150ed-be6b-4713-ac8f-63896a7aff87-hubble-tls\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157305 kubelet[2770]: I0911 00:16:54.157008 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-hostproc\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157305 kubelet[2770]: I0911 00:16:54.157045 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cni-path\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157542 kubelet[2770]: I0911 00:16:54.157067 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-host-proc-sys-net\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157542 kubelet[2770]: I0911 00:16:54.157097 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-run\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157542 kubelet[2770]: I0911 00:16:54.157113 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-bpf-maps\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157542 kubelet[2770]: I0911 00:16:54.157132 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-cgroup\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157542 kubelet[2770]: I0911 00:16:54.157151 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42e150ed-be6b-4713-ac8f-63896a7aff87-clustermesh-secrets\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.157542 kubelet[2770]: I0911 00:16:54.157170 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4gpw\" (UniqueName: \"kubernetes.io/projected/42e150ed-be6b-4713-ac8f-63896a7aff87-kube-api-access-c4gpw\") pod \"cilium-2bmkp\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " pod="kube-system/cilium-2bmkp" Sep 11 00:16:54.158574 kubelet[2770]: E0911 00:16:54.157893 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:54.165595 kubelet[2770]: E0911 00:16:54.164833 2770 projected.go:288] Couldn't get configMap 
kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 11 00:16:54.165595 kubelet[2770]: E0911 00:16:54.164875 2770 projected.go:194] Error preparing data for projected volume kube-api-access-nk25b for pod kube-system/kube-proxy-lmlhs: configmap "kube-root-ca.crt" not found Sep 11 00:16:54.165595 kubelet[2770]: E0911 00:16:54.164944 2770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/731c3a34-f3d3-474e-b71e-84767f581212-kube-api-access-nk25b podName:731c3a34-f3d3-474e-b71e-84767f581212 nodeName:}" failed. No retries permitted until 2025-09-11 00:16:54.664913469 +0000 UTC m=+4.657770941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nk25b" (UniqueName: "kubernetes.io/projected/731c3a34-f3d3-474e-b71e-84767f581212-kube-api-access-nk25b") pod "kube-proxy-lmlhs" (UID: "731c3a34-f3d3-474e-b71e-84767f581212") : configmap "kube-root-ca.crt" not found Sep 11 00:16:54.360436 kubelet[2770]: E0911 00:16:54.360259 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:54.361938 containerd[1561]: time="2025-09-11T00:16:54.361755675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2bmkp,Uid:42e150ed-be6b-4713-ac8f-63896a7aff87,Namespace:kube-system,Attempt:0,}" Sep 11 00:16:54.511905 containerd[1561]: time="2025-09-11T00:16:54.511814355Z" level=info msg="connecting to shim 5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948" address="unix:///run/containerd/s/5de2deb067eeef59e07d9ab3f94284c4f5ef824f16196436f6d8cdc3109b35a9" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:16:54.532925 systemd[1]: Created slice kubepods-besteffort-pod2c41fe9c_3c2c_412b_a69d_89d1d41ef025.slice - libcontainer container kubepods-besteffort-pod2c41fe9c_3c2c_412b_a69d_89d1d41ef025.slice. Sep 11 00:16:54.544760 systemd[1]: Started cri-containerd-5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948.scope - libcontainer container 5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948. 
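The MountVolume.SetUp failure above is ordering noise rather than a real fault: the projected kube-api-access volume needs the kube-root-ca.crt ConfigMap, which has evidently not been published into the namespace yet, so the volume manager refuses retries for 500ms and backs off on repeated failures. Below is a minimal sketch of that retry pattern; the 500ms initial delay comes from the log, while the doubling and the 2-minute cap are illustrative assumptions, not kubelet's exact constants.

// backoff.go - sketch of the "No retries permitted until ..." pattern above:
// exponential backoff starting at 500ms, capped at an assumed 2 minutes.
package main

import (
    "fmt"
    "time"
)

func main() {
    delay := 500 * time.Millisecond // first durationBeforeRetry seen in the log
    maxDelay := 2 * time.Minute     // assumed cap, for illustration only
    for attempt := 1; attempt <= 8; attempt++ {
        fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, delay)
        delay *= 2
        if delay > maxDelay {
            delay = maxDelay
        }
    }
}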
Sep 11 00:16:54.559526 kubelet[2770]: I0911 00:16:54.559461 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c41fe9c-3c2c-412b-a69d-89d1d41ef025-cilium-config-path\") pod \"cilium-operator-5d85765b45-6s594\" (UID: \"2c41fe9c-3c2c-412b-a69d-89d1d41ef025\") " pod="kube-system/cilium-operator-5d85765b45-6s594" Sep 11 00:16:54.560147 kubelet[2770]: I0911 00:16:54.560058 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xw87\" (UniqueName: \"kubernetes.io/projected/2c41fe9c-3c2c-412b-a69d-89d1d41ef025-kube-api-access-7xw87\") pod \"cilium-operator-5d85765b45-6s594\" (UID: \"2c41fe9c-3c2c-412b-a69d-89d1d41ef025\") " pod="kube-system/cilium-operator-5d85765b45-6s594" Sep 11 00:16:54.578528 containerd[1561]: time="2025-09-11T00:16:54.578473355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2bmkp,Uid:42e150ed-be6b-4713-ac8f-63896a7aff87,Namespace:kube-system,Attempt:0,} returns sandbox id \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\"" Sep 11 00:16:54.579318 kubelet[2770]: E0911 00:16:54.579294 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:54.580767 containerd[1561]: time="2025-09-11T00:16:54.580742977Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 11 00:16:54.838048 kubelet[2770]: E0911 00:16:54.837989 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:54.838770 containerd[1561]: time="2025-09-11T00:16:54.838715750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6s594,Uid:2c41fe9c-3c2c-412b-a69d-89d1d41ef025,Namespace:kube-system,Attempt:0,}" Sep 11 00:16:54.864262 containerd[1561]: time="2025-09-11T00:16:54.864206367Z" level=info msg="connecting to shim c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e" address="unix:///run/containerd/s/e876c07dd6525582bac2cbd466d3150470519d74a2e40b98988d5ce991197da9" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:16:54.890671 systemd[1]: Started cri-containerd-c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e.scope - libcontainer container c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e. 
Sep 11 00:16:54.940205 containerd[1561]: time="2025-09-11T00:16:54.940124152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6s594,Uid:2c41fe9c-3c2c-412b-a69d-89d1d41ef025,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e\"" Sep 11 00:16:54.940961 kubelet[2770]: E0911 00:16:54.940933 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:54.951844 kubelet[2770]: E0911 00:16:54.951811 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:54.952252 containerd[1561]: time="2025-09-11T00:16:54.952203902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmlhs,Uid:731c3a34-f3d3-474e-b71e-84767f581212,Namespace:kube-system,Attempt:0,}" Sep 11 00:16:54.975914 containerd[1561]: time="2025-09-11T00:16:54.975854026Z" level=info msg="connecting to shim b48da7fc510ceda6ae25e7d22398fce3de81a5ab71d021ba805ef1799aeb2c5a" address="unix:///run/containerd/s/7917740eaa584f61614ed78d8e10a98f032568b71a25b974092eac70c30fdff8" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:16:55.004683 systemd[1]: Started cri-containerd-b48da7fc510ceda6ae25e7d22398fce3de81a5ab71d021ba805ef1799aeb2c5a.scope - libcontainer container b48da7fc510ceda6ae25e7d22398fce3de81a5ab71d021ba805ef1799aeb2c5a. Sep 11 00:16:55.031826 containerd[1561]: time="2025-09-11T00:16:55.031772873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmlhs,Uid:731c3a34-f3d3-474e-b71e-84767f581212,Namespace:kube-system,Attempt:0,} returns sandbox id \"b48da7fc510ceda6ae25e7d22398fce3de81a5ab71d021ba805ef1799aeb2c5a\"" Sep 11 00:16:55.032481 kubelet[2770]: E0911 00:16:55.032459 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:55.034388 containerd[1561]: time="2025-09-11T00:16:55.034341623Z" level=info msg="CreateContainer within sandbox \"b48da7fc510ceda6ae25e7d22398fce3de81a5ab71d021ba805ef1799aeb2c5a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:16:55.046254 containerd[1561]: time="2025-09-11T00:16:55.046206194Z" level=info msg="Container 86cf91f3d7f66c0fcddf363f77c87b7c20357a7c06d3521b2f75cb6b8143e7b6: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:16:55.074306 containerd[1561]: time="2025-09-11T00:16:55.074239909Z" level=info msg="CreateContainer within sandbox \"b48da7fc510ceda6ae25e7d22398fce3de81a5ab71d021ba805ef1799aeb2c5a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86cf91f3d7f66c0fcddf363f77c87b7c20357a7c06d3521b2f75cb6b8143e7b6\"" Sep 11 00:16:55.074825 containerd[1561]: time="2025-09-11T00:16:55.074765524Z" level=info msg="StartContainer for \"86cf91f3d7f66c0fcddf363f77c87b7c20357a7c06d3521b2f75cb6b8143e7b6\"" Sep 11 00:16:55.076498 containerd[1561]: time="2025-09-11T00:16:55.076453187Z" level=info msg="connecting to shim 86cf91f3d7f66c0fcddf363f77c87b7c20357a7c06d3521b2f75cb6b8143e7b6" address="unix:///run/containerd/s/7917740eaa584f61614ed78d8e10a98f032568b71a25b974092eac70c30fdff8" protocol=ttrpc version=3 Sep 11 00:16:55.103698 systemd[1]: Started 
cri-containerd-86cf91f3d7f66c0fcddf363f77c87b7c20357a7c06d3521b2f75cb6b8143e7b6.scope - libcontainer container 86cf91f3d7f66c0fcddf363f77c87b7c20357a7c06d3521b2f75cb6b8143e7b6. Sep 11 00:16:55.152751 containerd[1561]: time="2025-09-11T00:16:55.152706649Z" level=info msg="StartContainer for \"86cf91f3d7f66c0fcddf363f77c87b7c20357a7c06d3521b2f75cb6b8143e7b6\" returns successfully" Sep 11 00:16:55.161674 kubelet[2770]: E0911 00:16:55.161549 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:55.172849 kubelet[2770]: I0911 00:16:55.172786 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lmlhs" podStartSLOduration=1.172766661 podStartE2EDuration="1.172766661s" podCreationTimestamp="2025-09-11 00:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:16:55.171572607 +0000 UTC m=+5.164430099" watchObservedRunningTime="2025-09-11 00:16:55.172766661 +0000 UTC m=+5.165624133" Sep 11 00:16:56.393474 kubelet[2770]: E0911 00:16:56.393369 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:16:57.166304 kubelet[2770]: E0911 00:16:57.166253 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:00.716092 kubelet[2770]: E0911 00:17:00.716036 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:01.173738 kubelet[2770]: E0911 00:17:01.173645 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:07.448387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066626622.mount: Deactivated successfully. 
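The podStartSLOduration=1.172766661s reported above for kube-proxy-lmlhs is plain timestamp arithmetic: the watch-observed running time minus the pod creation timestamp, with firstStartedPulling/lastFinishedPulling left at Go's zero time in this entry. The snippet below recomputes the figure from the two timestamps in the log.

// sloduration.go - recomputes kube-proxy-lmlhs's podStartSLOduration from the
// timestamps printed by pod_startup_latency_tracker above.
package main

import (
    "fmt"
    "time"
)

func main() {
    const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are consumed implicitly when parsing
    created, err := time.Parse(layout, "2025-09-11 00:16:54 +0000 UTC")
    if err != nil {
        panic(err)
    }
    observed, err := time.Parse(layout, "2025-09-11 00:16:55.172766661 +0000 UTC")
    if err != nil {
        panic(err)
    }
    fmt.Println(observed.Sub(created)) // 1.172766661s
}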
Sep 11 00:17:09.578989 containerd[1561]: time="2025-09-11T00:17:09.578883567Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:09.580022 containerd[1561]: time="2025-09-11T00:17:09.579972090Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 11 00:17:09.581886 containerd[1561]: time="2025-09-11T00:17:09.581834528Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:09.583493 containerd[1561]: time="2025-09-11T00:17:09.583432693Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.00265769s" Sep 11 00:17:09.583493 containerd[1561]: time="2025-09-11T00:17:09.583477953Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 11 00:17:09.592048 containerd[1561]: time="2025-09-11T00:17:09.591984941Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 11 00:17:09.603484 containerd[1561]: time="2025-09-11T00:17:09.601772122Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 00:17:09.615798 containerd[1561]: time="2025-09-11T00:17:09.615006530Z" level=info msg="Container 3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:09.626248 containerd[1561]: time="2025-09-11T00:17:09.626179824Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\"" Sep 11 00:17:09.626967 containerd[1561]: time="2025-09-11T00:17:09.626920095Z" level=info msg="StartContainer for \"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\"" Sep 11 00:17:09.627994 containerd[1561]: time="2025-09-11T00:17:09.627950382Z" level=info msg="connecting to shim 3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8" address="unix:///run/containerd/s/5de2deb067eeef59e07d9ab3f94284c4f5ef824f16196436f6d8cdc3109b35a9" protocol=ttrpc version=3 Sep 11 00:17:09.698728 systemd[1]: Started cri-containerd-3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8.scope - libcontainer container 3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8. 
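The 15-second PullImage above is mostly transfer time for the cilium image: containerd reports 166730503 bytes read over 15.00265769s, roughly 10.6 MiB/s from quay.io. A back-of-the-envelope check:

// pullrate.go - rough throughput of the cilium image pull reported above.
package main

import "fmt"

func main() {
    const bytesRead = 166730503.0 // "bytes read" from the stop-pulling event
    const seconds = 15.00265769   // pull duration from the Pulled message
    fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ~10.6 MiB/s
}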
Sep 11 00:17:09.737358 containerd[1561]: time="2025-09-11T00:17:09.737291242Z" level=info msg="StartContainer for \"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\" returns successfully" Sep 11 00:17:09.752153 systemd[1]: cri-containerd-3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8.scope: Deactivated successfully. Sep 11 00:17:09.754057 containerd[1561]: time="2025-09-11T00:17:09.754002616Z" level=info msg="received exit event container_id:\"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\" id:\"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\" pid:3190 exited_at:{seconds:1757549829 nanos:753435300}" Sep 11 00:17:09.754223 containerd[1561]: time="2025-09-11T00:17:09.754018268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\" id:\"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\" pid:3190 exited_at:{seconds:1757549829 nanos:753435300}" Sep 11 00:17:09.779598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8-rootfs.mount: Deactivated successfully. Sep 11 00:17:10.232629 kubelet[2770]: E0911 00:17:10.232590 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:11.236953 kubelet[2770]: E0911 00:17:11.236880 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:11.239578 containerd[1561]: time="2025-09-11T00:17:11.239294136Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 11 00:17:11.253865 containerd[1561]: time="2025-09-11T00:17:11.253782473Z" level=info msg="Container 621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:11.266638 containerd[1561]: time="2025-09-11T00:17:11.266537187Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\"" Sep 11 00:17:11.267246 containerd[1561]: time="2025-09-11T00:17:11.267200160Z" level=info msg="StartContainer for \"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\"" Sep 11 00:17:11.268561 containerd[1561]: time="2025-09-11T00:17:11.268434012Z" level=info msg="connecting to shim 621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73" address="unix:///run/containerd/s/5de2deb067eeef59e07d9ab3f94284c4f5ef824f16196436f6d8cdc3109b35a9" protocol=ttrpc version=3 Sep 11 00:17:11.304857 systemd[1]: Started cri-containerd-621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73.scope - libcontainer container 621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73. Sep 11 00:17:11.345677 containerd[1561]: time="2025-09-11T00:17:11.345604310Z" level=info msg="StartContainer for \"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\" returns successfully" Sep 11 00:17:11.362948 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
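The TaskExit event above carries the container's exit time as Unix seconds and nanoseconds (1757549829 / 753435300); converting it back gives the same 00:17:09.753 UTC wall-clock time as the surrounding journal lines, which is a handy sanity check when correlating containerd events with the journal.

// exitedat.go - converts the exited_at field of the TaskExit event above back
// to wall-clock time; it matches the 00:17:09.75 journal timestamps around it.
package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Unix(1757549829, 753435300).UTC()
    fmt.Println(t.Format("2006-01-02 15:04:05.000 MST")) // 2025-09-11 00:17:09.753 UTC
}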
Sep 11 00:17:11.363216 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:17:11.364792 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:17:11.367130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:17:11.369431 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 11 00:17:11.370214 systemd[1]: cri-containerd-621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73.scope: Deactivated successfully. Sep 11 00:17:11.370907 containerd[1561]: time="2025-09-11T00:17:11.370839496Z" level=info msg="received exit event container_id:\"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\" id:\"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\" pid:3236 exited_at:{seconds:1757549831 nanos:370125733}" Sep 11 00:17:11.371113 containerd[1561]: time="2025-09-11T00:17:11.371086696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\" id:\"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\" pid:3236 exited_at:{seconds:1757549831 nanos:370125733}" Sep 11 00:17:11.407366 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:17:12.020221 containerd[1561]: time="2025-09-11T00:17:12.020162303Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:12.021046 containerd[1561]: time="2025-09-11T00:17:12.021005730Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 11 00:17:12.022348 containerd[1561]: time="2025-09-11T00:17:12.022302644Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:12.023748 containerd[1561]: time="2025-09-11T00:17:12.023682773Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.431627141s" Sep 11 00:17:12.023748 containerd[1561]: time="2025-09-11T00:17:12.023740898Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 11 00:17:12.026019 containerd[1561]: time="2025-09-11T00:17:12.025990886Z" level=info msg="CreateContainer within sandbox \"c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 11 00:17:12.035667 containerd[1561]: time="2025-09-11T00:17:12.035630414Z" level=info msg="Container 8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:12.043281 containerd[1561]: time="2025-09-11T00:17:12.043227074Z" level=info msg="CreateContainer within sandbox 
\"c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\"" Sep 11 00:17:12.043977 containerd[1561]: time="2025-09-11T00:17:12.043922700Z" level=info msg="StartContainer for \"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\"" Sep 11 00:17:12.045247 containerd[1561]: time="2025-09-11T00:17:12.045209443Z" level=info msg="connecting to shim 8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630" address="unix:///run/containerd/s/e876c07dd6525582bac2cbd466d3150470519d74a2e40b98988d5ce991197da9" protocol=ttrpc version=3 Sep 11 00:17:12.076725 systemd[1]: Started cri-containerd-8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630.scope - libcontainer container 8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630. Sep 11 00:17:12.113962 containerd[1561]: time="2025-09-11T00:17:12.113844674Z" level=info msg="StartContainer for \"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" returns successfully" Sep 11 00:17:12.244616 kubelet[2770]: E0911 00:17:12.244566 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:12.248122 containerd[1561]: time="2025-09-11T00:17:12.247887072Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 11 00:17:12.250214 kubelet[2770]: E0911 00:17:12.250099 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:12.260522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73-rootfs.mount: Deactivated successfully. Sep 11 00:17:12.275599 containerd[1561]: time="2025-09-11T00:17:12.273526103Z" level=info msg="Container 7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:12.294386 containerd[1561]: time="2025-09-11T00:17:12.294317461Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\"" Sep 11 00:17:12.296271 containerd[1561]: time="2025-09-11T00:17:12.296239530Z" level=info msg="StartContainer for \"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\"" Sep 11 00:17:12.299175 containerd[1561]: time="2025-09-11T00:17:12.299108553Z" level=info msg="connecting to shim 7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967" address="unix:///run/containerd/s/5de2deb067eeef59e07d9ab3f94284c4f5ef824f16196436f6d8cdc3109b35a9" protocol=ttrpc version=3 Sep 11 00:17:12.353783 systemd[1]: Started cri-containerd-7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967.scope - libcontainer container 7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967. Sep 11 00:17:12.412487 systemd[1]: cri-containerd-7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967.scope: Deactivated successfully. 
Sep 11 00:17:12.415738 containerd[1561]: time="2025-09-11T00:17:12.415684448Z" level=info msg="received exit event container_id:\"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\" id:\"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\" pid:3334 exited_at:{seconds:1757549832 nanos:415418111}" Sep 11 00:17:12.415907 containerd[1561]: time="2025-09-11T00:17:12.415868872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\" id:\"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\" pid:3334 exited_at:{seconds:1757549832 nanos:415418111}" Sep 11 00:17:12.418001 containerd[1561]: time="2025-09-11T00:17:12.417969706Z" level=info msg="StartContainer for \"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\" returns successfully" Sep 11 00:17:13.255185 kubelet[2770]: E0911 00:17:13.254890 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:13.255915 kubelet[2770]: E0911 00:17:13.255559 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:13.256458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967-rootfs.mount: Deactivated successfully. Sep 11 00:17:13.258761 containerd[1561]: time="2025-09-11T00:17:13.258714175Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 11 00:17:13.272567 containerd[1561]: time="2025-09-11T00:17:13.272332804Z" level=info msg="Container 5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:13.277060 kubelet[2770]: I0911 00:17:13.276902 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6s594" podStartSLOduration=2.194337161 podStartE2EDuration="19.276838641s" podCreationTimestamp="2025-09-11 00:16:54 +0000 UTC" firstStartedPulling="2025-09-11 00:16:54.942024189 +0000 UTC m=+4.934881661" lastFinishedPulling="2025-09-11 00:17:12.024525669 +0000 UTC m=+22.017383141" observedRunningTime="2025-09-11 00:17:12.314229629 +0000 UTC m=+22.307087101" watchObservedRunningTime="2025-09-11 00:17:13.276838641 +0000 UTC m=+23.269696113" Sep 11 00:17:13.278759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1402591422.mount: Deactivated successfully. 
Sep 11 00:17:13.285158 containerd[1561]: time="2025-09-11T00:17:13.285089892Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\"" Sep 11 00:17:13.285897 containerd[1561]: time="2025-09-11T00:17:13.285836386Z" level=info msg="StartContainer for \"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\"" Sep 11 00:17:13.287050 containerd[1561]: time="2025-09-11T00:17:13.287021736Z" level=info msg="connecting to shim 5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773" address="unix:///run/containerd/s/5de2deb067eeef59e07d9ab3f94284c4f5ef824f16196436f6d8cdc3109b35a9" protocol=ttrpc version=3 Sep 11 00:17:13.318939 systemd[1]: Started cri-containerd-5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773.scope - libcontainer container 5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773. Sep 11 00:17:13.357319 systemd[1]: cri-containerd-5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773.scope: Deactivated successfully. Sep 11 00:17:13.358092 containerd[1561]: time="2025-09-11T00:17:13.358010147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\" id:\"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\" pid:3374 exited_at:{seconds:1757549833 nanos:357444981}" Sep 11 00:17:13.360254 containerd[1561]: time="2025-09-11T00:17:13.360204399Z" level=info msg="received exit event container_id:\"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\" id:\"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\" pid:3374 exited_at:{seconds:1757549833 nanos:357444981}" Sep 11 00:17:13.369181 containerd[1561]: time="2025-09-11T00:17:13.369132808Z" level=info msg="StartContainer for \"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\" returns successfully" Sep 11 00:17:13.384835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773-rootfs.mount: Deactivated successfully. Sep 11 00:17:14.261109 kubelet[2770]: E0911 00:17:14.261044 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:14.263706 containerd[1561]: time="2025-09-11T00:17:14.263640357Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 11 00:17:14.290782 containerd[1561]: time="2025-09-11T00:17:14.290684965Z" level=info msg="Container 0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:14.294658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311508085.mount: Deactivated successfully. 
Sep 11 00:17:14.301730 containerd[1561]: time="2025-09-11T00:17:14.301670458Z" level=info msg="CreateContainer within sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\"" Sep 11 00:17:14.302304 containerd[1561]: time="2025-09-11T00:17:14.302257385Z" level=info msg="StartContainer for \"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\"" Sep 11 00:17:14.303254 containerd[1561]: time="2025-09-11T00:17:14.303210504Z" level=info msg="connecting to shim 0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500" address="unix:///run/containerd/s/5de2deb067eeef59e07d9ab3f94284c4f5ef824f16196436f6d8cdc3109b35a9" protocol=ttrpc version=3 Sep 11 00:17:14.330804 systemd[1]: Started cri-containerd-0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500.scope - libcontainer container 0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500. Sep 11 00:17:14.374827 containerd[1561]: time="2025-09-11T00:17:14.374773419Z" level=info msg="StartContainer for \"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" returns successfully" Sep 11 00:17:14.473815 containerd[1561]: time="2025-09-11T00:17:14.473731152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" id:\"9a7e6a6f37f606fe97b96db2e4f901550ac682a0535d33d05c41724c234fcb48\" pid:3441 exited_at:{seconds:1757549834 nanos:473140667}" Sep 11 00:17:14.537804 kubelet[2770]: I0911 00:17:14.537636 2770 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 11 00:17:14.585053 systemd[1]: Created slice kubepods-burstable-pod0adf3a46_d7c3_4812_bd5b_9866cd695802.slice - libcontainer container kubepods-burstable-pod0adf3a46_d7c3_4812_bd5b_9866cd695802.slice. Sep 11 00:17:14.597310 systemd[1]: Created slice kubepods-burstable-pod6e5f26ab_f8cf_4eab_b642_b6ed53d93cf9.slice - libcontainer container kubepods-burstable-pod6e5f26ab_f8cf_4eab_b642_b6ed53d93cf9.slice. 
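The kubepods-*.slice units that systemd keeps creating in this log (including the two coredns pod slices just above) follow the naming scheme of kubelet's systemd cgroup driver: the pod's QoS class plus its UID with dashes escaped to underscores. A small sketch of that mapping, not kubelet's actual implementation:

// podslice.go - reproduces the slice names logged by systemd for the pods in
// this boot (sketch only; kubelet builds these through its cgroup manager).
package main

import (
    "fmt"
    "strings"
)

func podSlice(qosClass, podUID string) string {
    return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
    fmt.Println(podSlice("besteffort", "731c3a34-f3d3-474e-b71e-84767f581212")) // kube-proxy-lmlhs
    fmt.Println(podSlice("burstable", "6e5f26ab-f8cf-4eab-b642-b6ed53d93cf9"))  // coredns-7c65d6cfc9-nh6rd
}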
Sep 11 00:17:14.598790 kubelet[2770]: I0911 00:17:14.598754 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2frlm\" (UniqueName: \"kubernetes.io/projected/0adf3a46-d7c3-4812-bd5b-9866cd695802-kube-api-access-2frlm\") pod \"coredns-7c65d6cfc9-ws98w\" (UID: \"0adf3a46-d7c3-4812-bd5b-9866cd695802\") " pod="kube-system/coredns-7c65d6cfc9-ws98w" Sep 11 00:17:14.598986 kubelet[2770]: I0911 00:17:14.598938 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0adf3a46-d7c3-4812-bd5b-9866cd695802-config-volume\") pod \"coredns-7c65d6cfc9-ws98w\" (UID: \"0adf3a46-d7c3-4812-bd5b-9866cd695802\") " pod="kube-system/coredns-7c65d6cfc9-ws98w" Sep 11 00:17:14.600290 kubelet[2770]: I0911 00:17:14.600219 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcccr\" (UniqueName: \"kubernetes.io/projected/6e5f26ab-f8cf-4eab-b642-b6ed53d93cf9-kube-api-access-rcccr\") pod \"coredns-7c65d6cfc9-nh6rd\" (UID: \"6e5f26ab-f8cf-4eab-b642-b6ed53d93cf9\") " pod="kube-system/coredns-7c65d6cfc9-nh6rd" Sep 11 00:17:14.601080 kubelet[2770]: I0911 00:17:14.601021 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e5f26ab-f8cf-4eab-b642-b6ed53d93cf9-config-volume\") pod \"coredns-7c65d6cfc9-nh6rd\" (UID: \"6e5f26ab-f8cf-4eab-b642-b6ed53d93cf9\") " pod="kube-system/coredns-7c65d6cfc9-nh6rd" Sep 11 00:17:14.892659 kubelet[2770]: E0911 00:17:14.892606 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:14.903006 containerd[1561]: time="2025-09-11T00:17:14.902929793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ws98w,Uid:0adf3a46-d7c3-4812-bd5b-9866cd695802,Namespace:kube-system,Attempt:0,}" Sep 11 00:17:14.904144 kubelet[2770]: E0911 00:17:14.904085 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:14.905033 containerd[1561]: time="2025-09-11T00:17:14.904988893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nh6rd,Uid:6e5f26ab-f8cf-4eab-b642-b6ed53d93cf9,Namespace:kube-system,Attempt:0,}" Sep 11 00:17:15.270063 kubelet[2770]: E0911 00:17:15.269897 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:16.272601 kubelet[2770]: E0911 00:17:16.272539 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:16.754248 systemd-networkd[1484]: cilium_host: Link UP Sep 11 00:17:16.754494 systemd-networkd[1484]: cilium_net: Link UP Sep 11 00:17:16.754800 systemd-networkd[1484]: cilium_net: Gained carrier Sep 11 00:17:16.755027 systemd-networkd[1484]: cilium_host: Gained carrier Sep 11 00:17:16.926009 systemd-networkd[1484]: cilium_vxlan: Link UP Sep 11 00:17:16.926025 systemd-networkd[1484]: cilium_vxlan: Gained carrier Sep 11 00:17:17.267567 kernel: NET: Registered PF_ALG protocol family Sep 11 
00:17:17.275127 kubelet[2770]: E0911 00:17:17.275085 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:17.333830 systemd-networkd[1484]: cilium_host: Gained IPv6LL Sep 11 00:17:17.653988 systemd-networkd[1484]: cilium_net: Gained IPv6LL Sep 11 00:17:18.134803 systemd-networkd[1484]: lxc_health: Link UP Sep 11 00:17:18.136097 systemd-networkd[1484]: lxc_health: Gained carrier Sep 11 00:17:18.363660 kubelet[2770]: E0911 00:17:18.363393 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:18.417271 kubelet[2770]: I0911 00:17:18.416692 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2bmkp" podStartSLOduration=9.40620294 podStartE2EDuration="24.416675128s" podCreationTimestamp="2025-09-11 00:16:54 +0000 UTC" firstStartedPulling="2025-09-11 00:16:54.580240797 +0000 UTC m=+4.573098269" lastFinishedPulling="2025-09-11 00:17:09.590712984 +0000 UTC m=+19.583570457" observedRunningTime="2025-09-11 00:17:15.286838709 +0000 UTC m=+25.279696181" watchObservedRunningTime="2025-09-11 00:17:18.416675128 +0000 UTC m=+28.409532600" Sep 11 00:17:18.494807 kernel: eth0: renamed from tmp84534 Sep 11 00:17:18.496868 systemd-networkd[1484]: lxc1a2bea224753: Link UP Sep 11 00:17:18.498720 systemd-networkd[1484]: lxc1a2bea224753: Gained carrier Sep 11 00:17:18.516650 kernel: eth0: renamed from tmpe940c Sep 11 00:17:18.518473 systemd-networkd[1484]: lxce2989a0a7c4f: Link UP Sep 11 00:17:18.518893 systemd-networkd[1484]: lxce2989a0a7c4f: Gained carrier Sep 11 00:17:18.872776 systemd-networkd[1484]: cilium_vxlan: Gained IPv6LL Sep 11 00:17:19.286703 kubelet[2770]: E0911 00:17:19.286549 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:19.637750 systemd-networkd[1484]: lxc_health: Gained IPv6LL Sep 11 00:17:19.829839 systemd-networkd[1484]: lxce2989a0a7c4f: Gained IPv6LL Sep 11 00:17:20.288568 kubelet[2770]: E0911 00:17:20.288495 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:20.405754 systemd-networkd[1484]: lxc1a2bea224753: Gained IPv6LL Sep 11 00:17:20.554692 systemd[1]: Started sshd@9-10.0.0.58:22-10.0.0.1:33312.service - OpenSSH per-connection server daemon (10.0.0.1:33312). Sep 11 00:17:20.632489 sshd[3906]: Accepted publickey for core from 10.0.0.1 port 33312 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:20.634435 sshd-session[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:20.641342 systemd-logind[1548]: New session 10 of user core. Sep 11 00:17:20.655782 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 11 00:17:20.817168 sshd[3909]: Connection closed by 10.0.0.1 port 33312 Sep 11 00:17:20.818745 sshd-session[3906]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:20.824138 systemd[1]: sshd@9-10.0.0.58:22-10.0.0.1:33312.service: Deactivated successfully. Sep 11 00:17:20.827168 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 00:17:20.829446 systemd-logind[1548]: Session 10 logged out. 
Waiting for processes to exit. Sep 11 00:17:20.831216 systemd-logind[1548]: Removed session 10. Sep 11 00:17:22.214609 containerd[1561]: time="2025-09-11T00:17:22.214538741Z" level=info msg="connecting to shim e940c72e31f99e590e7b5b60661448e00d3e8cea707236df9c99a83c69c511ee" address="unix:///run/containerd/s/a5f80c1951a977792e61cdab662e53dc91288cd21ccdbc55c7c0e322a4eaf8f6" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:17:22.217337 containerd[1561]: time="2025-09-11T00:17:22.217289343Z" level=info msg="connecting to shim 84534dde89a038f7e3c9f8d6476643ff17fe8b4678ab4f674961f1645a0ad67c" address="unix:///run/containerd/s/386bb72e49d51f77f44f6cad1eb31cdaa8167158e302bcd872523d047a4ed200" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:17:22.254803 systemd[1]: Started cri-containerd-e940c72e31f99e590e7b5b60661448e00d3e8cea707236df9c99a83c69c511ee.scope - libcontainer container e940c72e31f99e590e7b5b60661448e00d3e8cea707236df9c99a83c69c511ee. Sep 11 00:17:22.259281 systemd[1]: Started cri-containerd-84534dde89a038f7e3c9f8d6476643ff17fe8b4678ab4f674961f1645a0ad67c.scope - libcontainer container 84534dde89a038f7e3c9f8d6476643ff17fe8b4678ab4f674961f1645a0ad67c. Sep 11 00:17:22.272400 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:17:22.276144 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:17:22.318027 containerd[1561]: time="2025-09-11T00:17:22.317947998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ws98w,Uid:0adf3a46-d7c3-4812-bd5b-9866cd695802,Namespace:kube-system,Attempt:0,} returns sandbox id \"e940c72e31f99e590e7b5b60661448e00d3e8cea707236df9c99a83c69c511ee\"" Sep 11 00:17:22.319044 kubelet[2770]: E0911 00:17:22.318911 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:22.319571 containerd[1561]: time="2025-09-11T00:17:22.318917845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nh6rd,Uid:6e5f26ab-f8cf-4eab-b642-b6ed53d93cf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"84534dde89a038f7e3c9f8d6476643ff17fe8b4678ab4f674961f1645a0ad67c\"" Sep 11 00:17:22.319632 kubelet[2770]: E0911 00:17:22.319488 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:22.321112 containerd[1561]: time="2025-09-11T00:17:22.321076690Z" level=info msg="CreateContainer within sandbox \"e940c72e31f99e590e7b5b60661448e00d3e8cea707236df9c99a83c69c511ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:17:22.321538 containerd[1561]: time="2025-09-11T00:17:22.321490459Z" level=info msg="CreateContainer within sandbox \"84534dde89a038f7e3c9f8d6476643ff17fe8b4678ab4f674961f1645a0ad67c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:17:22.339837 containerd[1561]: time="2025-09-11T00:17:22.339774674Z" level=info msg="Container 7e9fe803deaa70ce67f494ad6753c2d2ea981c17027a9189e1647a22be8d5de2: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:22.348951 containerd[1561]: time="2025-09-11T00:17:22.348885319Z" level=info msg="Container 34147ef712eff838b89f084f292d98da5d4ca3be8827943463dfcf7aa1a32953: CDI devices from CRI Config.CDIDevices: []" Sep 11 
00:17:22.354715 containerd[1561]: time="2025-09-11T00:17:22.354655538Z" level=info msg="CreateContainer within sandbox \"e940c72e31f99e590e7b5b60661448e00d3e8cea707236df9c99a83c69c511ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e9fe803deaa70ce67f494ad6753c2d2ea981c17027a9189e1647a22be8d5de2\"" Sep 11 00:17:22.355925 containerd[1561]: time="2025-09-11T00:17:22.355882810Z" level=info msg="StartContainer for \"7e9fe803deaa70ce67f494ad6753c2d2ea981c17027a9189e1647a22be8d5de2\"" Sep 11 00:17:22.358214 containerd[1561]: time="2025-09-11T00:17:22.358167018Z" level=info msg="connecting to shim 7e9fe803deaa70ce67f494ad6753c2d2ea981c17027a9189e1647a22be8d5de2" address="unix:///run/containerd/s/a5f80c1951a977792e61cdab662e53dc91288cd21ccdbc55c7c0e322a4eaf8f6" protocol=ttrpc version=3 Sep 11 00:17:22.360578 containerd[1561]: time="2025-09-11T00:17:22.360543399Z" level=info msg="CreateContainer within sandbox \"84534dde89a038f7e3c9f8d6476643ff17fe8b4678ab4f674961f1645a0ad67c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34147ef712eff838b89f084f292d98da5d4ca3be8827943463dfcf7aa1a32953\"" Sep 11 00:17:22.361731 containerd[1561]: time="2025-09-11T00:17:22.361698689Z" level=info msg="StartContainer for \"34147ef712eff838b89f084f292d98da5d4ca3be8827943463dfcf7aa1a32953\"" Sep 11 00:17:22.362960 containerd[1561]: time="2025-09-11T00:17:22.362884629Z" level=info msg="connecting to shim 34147ef712eff838b89f084f292d98da5d4ca3be8827943463dfcf7aa1a32953" address="unix:///run/containerd/s/386bb72e49d51f77f44f6cad1eb31cdaa8167158e302bcd872523d047a4ed200" protocol=ttrpc version=3 Sep 11 00:17:22.385780 systemd[1]: Started cri-containerd-7e9fe803deaa70ce67f494ad6753c2d2ea981c17027a9189e1647a22be8d5de2.scope - libcontainer container 7e9fe803deaa70ce67f494ad6753c2d2ea981c17027a9189e1647a22be8d5de2. Sep 11 00:17:22.390406 systemd[1]: Started cri-containerd-34147ef712eff838b89f084f292d98da5d4ca3be8827943463dfcf7aa1a32953.scope - libcontainer container 34147ef712eff838b89f084f292d98da5d4ca3be8827943463dfcf7aa1a32953. 
Sep 11 00:17:22.446194 containerd[1561]: time="2025-09-11T00:17:22.446136356Z" level=info msg="StartContainer for \"34147ef712eff838b89f084f292d98da5d4ca3be8827943463dfcf7aa1a32953\" returns successfully" Sep 11 00:17:22.446467 containerd[1561]: time="2025-09-11T00:17:22.446246422Z" level=info msg="StartContainer for \"7e9fe803deaa70ce67f494ad6753c2d2ea981c17027a9189e1647a22be8d5de2\" returns successfully" Sep 11 00:17:23.309739 kubelet[2770]: E0911 00:17:23.309649 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:23.311909 kubelet[2770]: E0911 00:17:23.311881 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:23.323845 kubelet[2770]: I0911 00:17:23.323754 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nh6rd" podStartSLOduration=29.3237308 podStartE2EDuration="29.3237308s" podCreationTimestamp="2025-09-11 00:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:17:23.321107531 +0000 UTC m=+33.313965003" watchObservedRunningTime="2025-09-11 00:17:23.3237308 +0000 UTC m=+33.316588292" Sep 11 00:17:23.349486 kubelet[2770]: I0911 00:17:23.349409 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ws98w" podStartSLOduration=29.34938631 podStartE2EDuration="29.34938631s" podCreationTimestamp="2025-09-11 00:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:17:23.335672448 +0000 UTC m=+33.328529950" watchObservedRunningTime="2025-09-11 00:17:23.34938631 +0000 UTC m=+33.342243772" Sep 11 00:17:24.313929 kubelet[2770]: E0911 00:17:24.313880 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:24.893668 kubelet[2770]: E0911 00:17:24.893473 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:25.316113 kubelet[2770]: E0911 00:17:25.315956 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:25.316276 kubelet[2770]: E0911 00:17:25.316184 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:25.839538 systemd[1]: Started sshd@10-10.0.0.58:22-10.0.0.1:33328.service - OpenSSH per-connection server daemon (10.0.0.1:33328). Sep 11 00:17:25.886792 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 33328 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:25.888536 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:25.893252 systemd-logind[1548]: New session 11 of user core. Sep 11 00:17:25.904733 systemd[1]: Started session-11.scope - Session 11 of User core. 
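The m=+… suffix on the kubelet timestamps throughout this log (m=+33.316588292 in the coredns entries above) is Go's monotonic clock reading, which time.Time's String method appends whenever the value still carries one; it counts seconds roughly from process start, which is why these offsets consistently place kubelet's startup near 00:16:50. A tiny illustration of where the suffix comes from:

// monotonic.go - shows the "m=+..." suffix Go prints for times that carry a
// monotonic clock reading, as seen in the kubelet log entries above.
package main

import (
    "fmt"
    "time"
)

func main() {
    start := time.Now()
    time.Sleep(50 * time.Millisecond)
    fmt.Println(time.Now())        // wall-clock time followed by m=+0.05xxxxxxx
    fmt.Println(time.Since(start)) // computed from the monotonic readings, ~50ms
}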
Sep 11 00:17:26.034061 sshd[4113]: Connection closed by 10.0.0.1 port 33328 Sep 11 00:17:26.034477 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:26.039788 systemd[1]: sshd@10-10.0.0.58:22-10.0.0.1:33328.service: Deactivated successfully. Sep 11 00:17:26.042815 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 00:17:26.044021 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Sep 11 00:17:26.045845 systemd-logind[1548]: Removed session 11. Sep 11 00:17:31.048078 systemd[1]: Started sshd@11-10.0.0.58:22-10.0.0.1:40628.service - OpenSSH per-connection server daemon (10.0.0.1:40628). Sep 11 00:17:31.113680 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 40628 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:31.115776 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:31.121769 systemd-logind[1548]: New session 12 of user core. Sep 11 00:17:31.136858 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 11 00:17:31.254715 sshd[4131]: Connection closed by 10.0.0.1 port 40628 Sep 11 00:17:31.255088 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:31.260162 systemd[1]: sshd@11-10.0.0.58:22-10.0.0.1:40628.service: Deactivated successfully. Sep 11 00:17:31.262168 systemd[1]: session-12.scope: Deactivated successfully. Sep 11 00:17:31.263361 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Sep 11 00:17:31.265022 systemd-logind[1548]: Removed session 12. Sep 11 00:17:36.271755 systemd[1]: Started sshd@12-10.0.0.58:22-10.0.0.1:40638.service - OpenSSH per-connection server daemon (10.0.0.1:40638). Sep 11 00:17:36.329842 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 40638 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:36.331480 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:36.336617 systemd-logind[1548]: New session 13 of user core. Sep 11 00:17:36.351780 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 11 00:17:36.465623 sshd[4148]: Connection closed by 10.0.0.1 port 40638 Sep 11 00:17:36.466007 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:36.470687 systemd[1]: sshd@12-10.0.0.58:22-10.0.0.1:40638.service: Deactivated successfully. Sep 11 00:17:36.472693 systemd[1]: session-13.scope: Deactivated successfully. Sep 11 00:17:36.473592 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Sep 11 00:17:36.475086 systemd-logind[1548]: Removed session 13. Sep 11 00:17:41.479282 systemd[1]: Started sshd@13-10.0.0.58:22-10.0.0.1:45770.service - OpenSSH per-connection server daemon (10.0.0.1:45770). Sep 11 00:17:41.531084 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 45770 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:41.532933 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:41.537439 systemd-logind[1548]: New session 14 of user core. Sep 11 00:17:41.551698 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 11 00:17:41.662960 sshd[4166]: Connection closed by 10.0.0.1 port 45770 Sep 11 00:17:41.663369 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:41.667017 systemd[1]: sshd@13-10.0.0.58:22-10.0.0.1:45770.service: Deactivated successfully. Sep 11 00:17:41.669531 systemd[1]: session-14.scope: Deactivated successfully. Sep 11 00:17:41.671169 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Sep 11 00:17:41.672850 systemd-logind[1548]: Removed session 14. Sep 11 00:17:46.678331 systemd[1]: Started sshd@14-10.0.0.58:22-10.0.0.1:45780.service - OpenSSH per-connection server daemon (10.0.0.1:45780). Sep 11 00:17:46.744120 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 45780 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:46.745951 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:46.751015 systemd-logind[1548]: New session 15 of user core. Sep 11 00:17:46.760695 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 11 00:17:46.875586 sshd[4183]: Connection closed by 10.0.0.1 port 45780 Sep 11 00:17:46.875991 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:46.887777 systemd[1]: sshd@14-10.0.0.58:22-10.0.0.1:45780.service: Deactivated successfully. Sep 11 00:17:46.890050 systemd[1]: session-15.scope: Deactivated successfully. Sep 11 00:17:46.891256 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Sep 11 00:17:46.894451 systemd[1]: Started sshd@15-10.0.0.58:22-10.0.0.1:45782.service - OpenSSH per-connection server daemon (10.0.0.1:45782). Sep 11 00:17:46.895148 systemd-logind[1548]: Removed session 15. Sep 11 00:17:46.959951 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 45782 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:46.961684 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:46.967164 systemd-logind[1548]: New session 16 of user core. Sep 11 00:17:46.976788 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 11 00:17:47.141963 sshd[4201]: Connection closed by 10.0.0.1 port 45782 Sep 11 00:17:47.142941 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:47.152998 systemd[1]: sshd@15-10.0.0.58:22-10.0.0.1:45782.service: Deactivated successfully. Sep 11 00:17:47.155915 systemd[1]: session-16.scope: Deactivated successfully. Sep 11 00:17:47.158145 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. Sep 11 00:17:47.161548 systemd[1]: Started sshd@16-10.0.0.58:22-10.0.0.1:45796.service - OpenSSH per-connection server daemon (10.0.0.1:45796). Sep 11 00:17:47.162711 systemd-logind[1548]: Removed session 16. Sep 11 00:17:47.209166 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 45796 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:47.211147 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:47.216970 systemd-logind[1548]: New session 17 of user core. Sep 11 00:17:47.234749 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 11 00:17:47.354211 sshd[4216]: Connection closed by 10.0.0.1 port 45796 Sep 11 00:17:47.354730 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:47.359532 systemd[1]: sshd@16-10.0.0.58:22-10.0.0.1:45796.service: Deactivated successfully. Sep 11 00:17:47.362743 systemd[1]: session-17.scope: Deactivated successfully. Sep 11 00:17:47.364855 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit. Sep 11 00:17:47.366493 systemd-logind[1548]: Removed session 17. Sep 11 00:17:52.381074 systemd[1]: Started sshd@17-10.0.0.58:22-10.0.0.1:58538.service - OpenSSH per-connection server daemon (10.0.0.1:58538). Sep 11 00:17:52.438351 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 58538 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:52.440164 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:52.445125 systemd-logind[1548]: New session 18 of user core. Sep 11 00:17:52.453710 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 11 00:17:52.577267 sshd[4236]: Connection closed by 10.0.0.1 port 58538 Sep 11 00:17:52.577708 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:52.582128 systemd[1]: sshd@17-10.0.0.58:22-10.0.0.1:58538.service: Deactivated successfully. Sep 11 00:17:52.586068 systemd[1]: session-18.scope: Deactivated successfully. Sep 11 00:17:52.587098 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit. Sep 11 00:17:52.589439 systemd-logind[1548]: Removed session 18. Sep 11 00:17:57.595015 systemd[1]: Started sshd@18-10.0.0.58:22-10.0.0.1:58554.service - OpenSSH per-connection server daemon (10.0.0.1:58554). Sep 11 00:17:57.653626 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 58554 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:57.655764 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:57.661045 systemd-logind[1548]: New session 19 of user core. Sep 11 00:17:57.671857 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 11 00:17:57.800540 sshd[4255]: Connection closed by 10.0.0.1 port 58554 Sep 11 00:17:57.801088 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:57.813286 systemd[1]: sshd@18-10.0.0.58:22-10.0.0.1:58554.service: Deactivated successfully. Sep 11 00:17:57.816180 systemd[1]: session-19.scope: Deactivated successfully. Sep 11 00:17:57.817094 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit. Sep 11 00:17:57.819590 systemd-logind[1548]: Removed session 19. Sep 11 00:17:57.821275 systemd[1]: Started sshd@19-10.0.0.58:22-10.0.0.1:58570.service - OpenSSH per-connection server daemon (10.0.0.1:58570). Sep 11 00:17:57.896414 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 58570 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:57.898606 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:57.905286 systemd-logind[1548]: New session 20 of user core. Sep 11 00:17:57.914817 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 11 00:17:59.055224 sshd[4271]: Connection closed by 10.0.0.1 port 58570 Sep 11 00:17:59.055800 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:59.071103 systemd[1]: sshd@19-10.0.0.58:22-10.0.0.1:58570.service: Deactivated successfully. Sep 11 00:17:59.074249 systemd[1]: session-20.scope: Deactivated successfully. Sep 11 00:17:59.075332 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit. Sep 11 00:17:59.080421 systemd[1]: Started sshd@20-10.0.0.58:22-10.0.0.1:58574.service - OpenSSH per-connection server daemon (10.0.0.1:58574). Sep 11 00:17:59.081182 systemd-logind[1548]: Removed session 20. Sep 11 00:17:59.138156 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 58574 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:17:59.139713 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:17:59.145382 systemd-logind[1548]: New session 21 of user core. Sep 11 00:17:59.152699 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 11 00:18:00.855254 sshd[4285]: Connection closed by 10.0.0.1 port 58574 Sep 11 00:18:00.857646 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:00.868035 systemd[1]: sshd@20-10.0.0.58:22-10.0.0.1:58574.service: Deactivated successfully. Sep 11 00:18:00.871148 systemd[1]: session-21.scope: Deactivated successfully. Sep 11 00:18:00.872962 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit. Sep 11 00:18:00.877459 systemd[1]: Started sshd@21-10.0.0.58:22-10.0.0.1:59122.service - OpenSSH per-connection server daemon (10.0.0.1:59122). Sep 11 00:18:00.881421 systemd-logind[1548]: Removed session 21. Sep 11 00:18:00.940752 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 59122 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:00.942639 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:00.949811 systemd-logind[1548]: New session 22 of user core. Sep 11 00:18:00.965844 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 11 00:18:01.498808 sshd[4320]: Connection closed by 10.0.0.1 port 59122 Sep 11 00:18:01.499249 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:01.515070 systemd[1]: sshd@21-10.0.0.58:22-10.0.0.1:59122.service: Deactivated successfully. Sep 11 00:18:01.517852 systemd[1]: session-22.scope: Deactivated successfully. Sep 11 00:18:01.518867 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit. Sep 11 00:18:01.523670 systemd[1]: Started sshd@22-10.0.0.58:22-10.0.0.1:59128.service - OpenSSH per-connection server daemon (10.0.0.1:59128). Sep 11 00:18:01.524441 systemd-logind[1548]: Removed session 22. Sep 11 00:18:01.577442 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 59128 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:01.579738 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:01.585627 systemd-logind[1548]: New session 23 of user core. Sep 11 00:18:01.594683 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 11 00:18:01.725780 sshd[4335]: Connection closed by 10.0.0.1 port 59128 Sep 11 00:18:01.726185 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:01.731621 systemd[1]: sshd@22-10.0.0.58:22-10.0.0.1:59128.service: Deactivated successfully. Sep 11 00:18:01.734496 systemd[1]: session-23.scope: Deactivated successfully. Sep 11 00:18:01.735910 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit. Sep 11 00:18:01.738106 systemd-logind[1548]: Removed session 23. Sep 11 00:18:05.131102 kubelet[2770]: E0911 00:18:05.131002 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:06.752624 systemd[1]: Started sshd@23-10.0.0.58:22-10.0.0.1:59136.service - OpenSSH per-connection server daemon (10.0.0.1:59136). Sep 11 00:18:06.814931 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 59136 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:06.817339 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:06.823024 systemd-logind[1548]: New session 24 of user core. Sep 11 00:18:06.832886 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 11 00:18:06.954686 sshd[4351]: Connection closed by 10.0.0.1 port 59136 Sep 11 00:18:06.955073 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:06.961213 systemd[1]: sshd@23-10.0.0.58:22-10.0.0.1:59136.service: Deactivated successfully. Sep 11 00:18:06.963826 systemd[1]: session-24.scope: Deactivated successfully. Sep 11 00:18:06.964847 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit. Sep 11 00:18:06.966614 systemd-logind[1548]: Removed session 24. Sep 11 00:18:11.981236 systemd[1]: Started sshd@24-10.0.0.58:22-10.0.0.1:49220.service - OpenSSH per-connection server daemon (10.0.0.1:49220). Sep 11 00:18:12.046630 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 49220 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:12.048726 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:12.054064 systemd-logind[1548]: New session 25 of user core. Sep 11 00:18:12.067797 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 11 00:18:12.191962 sshd[4370]: Connection closed by 10.0.0.1 port 49220 Sep 11 00:18:12.192411 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:12.198630 systemd[1]: sshd@24-10.0.0.58:22-10.0.0.1:49220.service: Deactivated successfully. Sep 11 00:18:12.201227 systemd[1]: session-25.scope: Deactivated successfully. Sep 11 00:18:12.202133 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit. Sep 11 00:18:12.204016 systemd-logind[1548]: Removed session 25. Sep 11 00:18:17.207230 systemd[1]: Started sshd@25-10.0.0.58:22-10.0.0.1:49230.service - OpenSSH per-connection server daemon (10.0.0.1:49230). Sep 11 00:18:17.276190 sshd[4384]: Accepted publickey for core from 10.0.0.1 port 49230 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:17.278637 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:17.285693 systemd-logind[1548]: New session 26 of user core. Sep 11 00:18:17.294844 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 11 00:18:17.424356 sshd[4387]: Connection closed by 10.0.0.1 port 49230 Sep 11 00:18:17.424846 sshd-session[4384]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:17.430193 systemd[1]: sshd@25-10.0.0.58:22-10.0.0.1:49230.service: Deactivated successfully. Sep 11 00:18:17.432613 systemd[1]: session-26.scope: Deactivated successfully. Sep 11 00:18:17.433857 systemd-logind[1548]: Session 26 logged out. Waiting for processes to exit. Sep 11 00:18:17.435099 systemd-logind[1548]: Removed session 26. Sep 11 00:18:22.440481 systemd[1]: Started sshd@26-10.0.0.58:22-10.0.0.1:58662.service - OpenSSH per-connection server daemon (10.0.0.1:58662). Sep 11 00:18:22.510856 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 58662 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:22.512902 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:22.518552 systemd-logind[1548]: New session 27 of user core. Sep 11 00:18:22.537908 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 11 00:18:22.662215 sshd[4404]: Connection closed by 10.0.0.1 port 58662 Sep 11 00:18:22.662626 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:22.672165 systemd[1]: sshd@26-10.0.0.58:22-10.0.0.1:58662.service: Deactivated successfully. Sep 11 00:18:22.674544 systemd[1]: session-27.scope: Deactivated successfully. Sep 11 00:18:22.675343 systemd-logind[1548]: Session 27 logged out. Waiting for processes to exit. Sep 11 00:18:22.678197 systemd[1]: Started sshd@27-10.0.0.58:22-10.0.0.1:58670.service - OpenSSH per-connection server daemon (10.0.0.1:58670). Sep 11 00:18:22.679130 systemd-logind[1548]: Removed session 27. Sep 11 00:18:22.738627 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 58670 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:22.740333 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:22.744990 systemd-logind[1548]: New session 28 of user core. Sep 11 00:18:22.762677 systemd[1]: Started session-28.scope - Session 28 of User core. 
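The entries above repeat one cycle per SSH connection: systemd starts a socket-activated per-connection unit (sshd@N-10.0.0.58:22-10.0.0.1:PORT.service), pam_unix opens a session for user core, systemd-logind logs "New session N of user core.", and on disconnect logind logs "Removed session N." before the unit deactivates. A minimal Python sketch, written against the logind message shapes visible here, that pairs those lines and reports how long each numbered session stayed open; it expects one journal entry per line on stdin, and the journalctl invocation in the docstring is an assumption, not taken from this host:

    #!/usr/bin/env python3
    """Pair systemd-logind 'New session N' / 'Removed session N' journal lines.

    Example (flags are an assumption; any text in the same format works):
        journalctl --no-hostname | python3 session_durations.py
    """
    import re
    import sys
    from datetime import datetime

    NEW = re.compile(r"^(?P<ts>\w{3} +\d+ [\d:.]+).*New session (?P<id>\d+) of user (?P<user>\w+)")
    GONE = re.compile(r"^(?P<ts>\w{3} +\d+ [\d:.]+).*Removed session (?P<id>\d+)\.")

    def parse_ts(ts: str, year: int = 2025) -> datetime:
        # Short journal timestamps carry no year; assume one for the arithmetic.
        return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

    def main() -> None:
        opened = {}  # session id -> (user, opened-at)
        for line in sys.stdin:
            if (m := NEW.search(line)):
                opened[m["id"]] = (m["user"], parse_ts(m["ts"]))
            elif (m := GONE.search(line)) and m["id"] in opened:
                user, start = opened.pop(m["id"])
                seconds = (parse_ts(m["ts"]) - start).total_seconds()
                print(f"session {m['id']:>3}  user={user}  open for {seconds:.1f}s")

    if __name__ == "__main__":
        main()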
Sep 11 00:18:23.130209 kubelet[2770]: E0911 00:18:23.130153 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:24.130367 kubelet[2770]: E0911 00:18:24.130239 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:24.932642 containerd[1561]: time="2025-09-11T00:18:24.932579726Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:18:24.939261 containerd[1561]: time="2025-09-11T00:18:24.939214930Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" id:\"a8a63867b4b598fab050dfe15885065a0d9cccf689e5063ce96c03c2085b17a7\" pid:4441 exited_at:{seconds:1757549904 nanos:938828392}" Sep 11 00:18:24.941170 containerd[1561]: time="2025-09-11T00:18:24.941121466Z" level=info msg="StopContainer for \"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" with timeout 2 (s)" Sep 11 00:18:24.950053 containerd[1561]: time="2025-09-11T00:18:24.950002782Z" level=info msg="StopContainer for \"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" with timeout 30 (s)" Sep 11 00:18:24.950715 containerd[1561]: time="2025-09-11T00:18:24.950653584Z" level=info msg="Stop container \"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" with signal terminated" Sep 11 00:18:24.950827 containerd[1561]: time="2025-09-11T00:18:24.950667800Z" level=info msg="Stop container \"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" with signal terminated" Sep 11 00:18:24.961550 systemd-networkd[1484]: lxc_health: Link DOWN Sep 11 00:18:24.961951 systemd-networkd[1484]: lxc_health: Lost carrier Sep 11 00:18:24.969152 systemd[1]: cri-containerd-8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630.scope: Deactivated successfully. Sep 11 00:18:24.972545 containerd[1561]: time="2025-09-11T00:18:24.972383983Z" level=info msg="received exit event container_id:\"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" id:\"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" pid:3300 exited_at:{seconds:1757549904 nanos:972020821}" Sep 11 00:18:24.972629 containerd[1561]: time="2025-09-11T00:18:24.972591349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" id:\"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" pid:3300 exited_at:{seconds:1757549904 nanos:972020821}" Sep 11 00:18:24.979933 systemd[1]: cri-containerd-0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500.scope: Deactivated successfully. Sep 11 00:18:24.980312 systemd[1]: cri-containerd-0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500.scope: Consumed 7.554s CPU time, 129.6M memory peak, 224K read from disk, 13.3M written to disk. 
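The kubelet warning that keeps recurring here (dns.go: "Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") means the resolv.conf handed to pods listed more nameservers than the three that most resolvers, and therefore kubelet, will accept, so only the first three were kept. A simplified Python illustration of that cap; the real check lives in kubelet's dns.go, and the fourth server in the example is made up:

    # Illustration of the cap behind kubelet's "Nameserver limits exceeded" warning.
    # MAX_NAMESERVERS mirrors the resolver limit of 3 that kubelet enforces.
    MAX_NAMESERVERS = 3

    def apply_nameserver_limit(nameservers: list[str]) -> tuple[list[str], list[str]]:
        """Return (applied, omitted) the way the kubelet limit check would."""
        applied = nameservers[:MAX_NAMESERVERS]
        omitted = nameservers[MAX_NAMESERVERS:]
        if omitted:
            print("Nameserver limits were exceeded, some nameservers have been omitted, "
                  f"the applied nameserver line is: {' '.join(applied)}")
        return applied, omitted

    # A resolv.conf with four entries reproduces the warning text seen above
    # (9.9.9.9 is hypothetical; only the first three appear in this log).
    apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])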
Sep 11 00:18:24.980840 containerd[1561]: time="2025-09-11T00:18:24.980796786Z" level=info msg="received exit event container_id:\"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" id:\"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" pid:3410 exited_at:{seconds:1757549904 nanos:980414468}" Sep 11 00:18:24.981807 containerd[1561]: time="2025-09-11T00:18:24.981676364Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" id:\"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" pid:3410 exited_at:{seconds:1757549904 nanos:980414468}" Sep 11 00:18:24.999946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630-rootfs.mount: Deactivated successfully. Sep 11 00:18:25.008201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500-rootfs.mount: Deactivated successfully. Sep 11 00:18:25.203697 kubelet[2770]: E0911 00:18:25.203453 2770 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 11 00:18:25.409665 containerd[1561]: time="2025-09-11T00:18:25.409019624Z" level=info msg="StopContainer for \"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" returns successfully" Sep 11 00:18:25.410227 containerd[1561]: time="2025-09-11T00:18:25.410180368Z" level=info msg="StopContainer for \"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" returns successfully" Sep 11 00:18:25.410961 containerd[1561]: time="2025-09-11T00:18:25.410913186Z" level=info msg="StopPodSandbox for \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\"" Sep 11 00:18:25.411157 containerd[1561]: time="2025-09-11T00:18:25.410990244Z" level=info msg="Container to stop \"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:18:25.411157 containerd[1561]: time="2025-09-11T00:18:25.411002527Z" level=info msg="Container to stop \"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:18:25.411157 containerd[1561]: time="2025-09-11T00:18:25.411011854Z" level=info msg="Container to stop \"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:18:25.411157 containerd[1561]: time="2025-09-11T00:18:25.411020892Z" level=info msg="Container to stop \"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:18:25.411157 containerd[1561]: time="2025-09-11T00:18:25.411028597Z" level=info msg="Container to stop \"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:18:25.412320 containerd[1561]: time="2025-09-11T00:18:25.412267169Z" level=info msg="StopPodSandbox for \"c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e\"" Sep 11 00:18:25.412394 containerd[1561]: time="2025-09-11T00:18:25.412334156Z" level=info msg="Container to stop \"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" must be in running 
or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:18:25.417976 systemd[1]: cri-containerd-5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948.scope: Deactivated successfully. Sep 11 00:18:25.420298 containerd[1561]: time="2025-09-11T00:18:25.420256239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" id:\"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" pid:2883 exit_status:137 exited_at:{seconds:1757549905 nanos:419931900}" Sep 11 00:18:25.421292 systemd[1]: cri-containerd-c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e.scope: Deactivated successfully. Sep 11 00:18:25.456704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948-rootfs.mount: Deactivated successfully. Sep 11 00:18:25.462657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e-rootfs.mount: Deactivated successfully. Sep 11 00:18:25.527346 containerd[1561]: time="2025-09-11T00:18:25.527274622Z" level=info msg="shim disconnected" id=5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948 namespace=k8s.io Sep 11 00:18:25.527346 containerd[1561]: time="2025-09-11T00:18:25.527315640Z" level=warning msg="cleaning up after shim disconnected" id=5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948 namespace=k8s.io Sep 11 00:18:25.542195 containerd[1561]: time="2025-09-11T00:18:25.527325249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:18:25.568081 containerd[1561]: time="2025-09-11T00:18:25.568029124Z" level=info msg="shim disconnected" id=c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e namespace=k8s.io Sep 11 00:18:25.568081 containerd[1561]: time="2025-09-11T00:18:25.568066525Z" level=warning msg="cleaning up after shim disconnected" id=c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e namespace=k8s.io Sep 11 00:18:25.568081 containerd[1561]: time="2025-09-11T00:18:25.568074410Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:18:25.572594 containerd[1561]: time="2025-09-11T00:18:25.570321817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e\" id:\"c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e\" pid:2931 exit_status:137 exited_at:{seconds:1757549905 nanos:421820062}" Sep 11 00:18:25.572594 containerd[1561]: time="2025-09-11T00:18:25.570530916Z" level=info msg="received exit event sandbox_id:\"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" exit_status:137 exited_at:{seconds:1757549905 nanos:419931900}" Sep 11 00:18:25.572594 containerd[1561]: time="2025-09-11T00:18:25.572101702Z" level=info msg="received exit event sandbox_id:\"c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e\" exit_status:137 exited_at:{seconds:1757549905 nanos:421820062}" Sep 11 00:18:25.573264 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e-shm.mount: Deactivated successfully. Sep 11 00:18:25.573432 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948-shm.mount: Deactivated successfully. 
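The TaskExit and received-exit events above carry their timestamps as raw epoch values (for example exited_at:{seconds:1757549904 nanos:972020821}). A short Python conversion shows these line up with the human-readable "Sep 11 00:18:24.972..." journal prefixes, which here track UTC (containerd's own time= fields end in Z):

    from datetime import datetime, timezone

    def exited_at(seconds: int, nanos: int) -> datetime:
        """Convert a containerd exited_at {seconds, nanos} pair to a UTC datetime."""
        return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

    # Prints 2025-09-11 00:18:24.972021+00:00 (give or take a microsecond of
    # float rounding), matching the journal prefix on the TaskExit entry above.
    print(exited_at(1757549904, 972020821))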
Sep 11 00:18:25.574051 containerd[1561]: time="2025-09-11T00:18:25.574014802Z" level=info msg="TearDown network for sandbox \"c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e\" successfully" Sep 11 00:18:25.574051 containerd[1561]: time="2025-09-11T00:18:25.574051492Z" level=info msg="StopPodSandbox for \"c77568eccc275c2a769d58774ea85520940fa78f9c890504dd0e7579db5d6d9e\" returns successfully" Sep 11 00:18:25.581406 containerd[1561]: time="2025-09-11T00:18:25.581351887Z" level=info msg="TearDown network for sandbox \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" successfully" Sep 11 00:18:25.581406 containerd[1561]: time="2025-09-11T00:18:25.581379971Z" level=info msg="StopPodSandbox for \"5feb32fdb4edf96226ffd1f6f25dde11315e0e0928d69a8adc7a5b8757e1c948\" returns successfully" Sep 11 00:18:25.688064 kubelet[2770]: I0911 00:18:25.687825 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-lib-modules\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.688064 kubelet[2770]: I0911 00:18:25.688022 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-etc-cni-netd\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.688064 kubelet[2770]: I0911 00:18:25.688052 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.688064 kubelet[2770]: I0911 00:18:25.688183 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-config-path\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.688064 kubelet[2770]: I0911 00:18:25.688219 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-host-proc-sys-kernel\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.688666 kubelet[2770]: I0911 00:18:25.688344 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.688666 kubelet[2770]: I0911 00:18:25.688439 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cni-path\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.688733 kubelet[2770]: I0911 00:18:25.688559 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42e150ed-be6b-4713-ac8f-63896a7aff87-clustermesh-secrets\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.688733 kubelet[2770]: I0911 00:18:25.688718 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-hostproc\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.690347 kubelet[2770]: I0911 00:18:25.688873 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-run\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.690347 kubelet[2770]: I0911 00:18:25.688968 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c41fe9c-3c2c-412b-a69d-89d1d41ef025-cilium-config-path\") pod \"2c41fe9c-3c2c-412b-a69d-89d1d41ef025\" (UID: \"2c41fe9c-3c2c-412b-a69d-89d1d41ef025\") " Sep 11 00:18:25.690347 kubelet[2770]: I0911 00:18:25.689147 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4gpw\" (UniqueName: \"kubernetes.io/projected/42e150ed-be6b-4713-ac8f-63896a7aff87-kube-api-access-c4gpw\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.690347 kubelet[2770]: I0911 00:18:25.689179 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-host-proc-sys-net\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.690347 kubelet[2770]: I0911 00:18:25.689284 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xw87\" (UniqueName: \"kubernetes.io/projected/2c41fe9c-3c2c-412b-a69d-89d1d41ef025-kube-api-access-7xw87\") pod \"2c41fe9c-3c2c-412b-a69d-89d1d41ef025\" (UID: \"2c41fe9c-3c2c-412b-a69d-89d1d41ef025\") " Sep 11 00:18:25.690347 kubelet[2770]: I0911 00:18:25.689352 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-cgroup\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.690599 kubelet[2770]: I0911 00:18:25.689329 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-hostproc" (OuterVolumeSpecName: "hostproc") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: 
"42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.690599 kubelet[2770]: I0911 00:18:25.689404 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.690599 kubelet[2770]: I0911 00:18:25.689411 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-bpf-maps\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.690599 kubelet[2770]: I0911 00:18:25.689428 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cni-path" (OuterVolumeSpecName: "cni-path") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.690599 kubelet[2770]: I0911 00:18:25.689441 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-xtables-lock\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.690750 kubelet[2770]: I0911 00:18:25.689541 2770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42e150ed-be6b-4713-ac8f-63896a7aff87-hubble-tls\") pod \"42e150ed-be6b-4713-ac8f-63896a7aff87\" (UID: \"42e150ed-be6b-4713-ac8f-63896a7aff87\") " Sep 11 00:18:25.690750 kubelet[2770]: I0911 00:18:25.689634 2770 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.690750 kubelet[2770]: I0911 00:18:25.689654 2770 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.690750 kubelet[2770]: I0911 00:18:25.689666 2770 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.690750 kubelet[2770]: I0911 00:18:25.689676 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.690750 kubelet[2770]: I0911 00:18:25.689679 2770 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.690750 kubelet[2770]: I0911 00:18:25.689731 2770 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.690957 kubelet[2770]: I0911 00:18:25.689711 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.694538 kubelet[2770]: I0911 00:18:25.693238 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 11 00:18:25.694538 kubelet[2770]: I0911 00:18:25.693734 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.694538 kubelet[2770]: I0911 00:18:25.693797 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.694538 kubelet[2770]: I0911 00:18:25.693823 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 11 00:18:25.695876 kubelet[2770]: I0911 00:18:25.695824 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e150ed-be6b-4713-ac8f-63896a7aff87-kube-api-access-c4gpw" (OuterVolumeSpecName: "kube-api-access-c4gpw") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "kube-api-access-c4gpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 00:18:25.695938 kubelet[2770]: I0911 00:18:25.695881 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e150ed-be6b-4713-ac8f-63896a7aff87-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 11 00:18:25.696455 kubelet[2770]: I0911 00:18:25.696405 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c41fe9c-3c2c-412b-a69d-89d1d41ef025-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c41fe9c-3c2c-412b-a69d-89d1d41ef025" (UID: "2c41fe9c-3c2c-412b-a69d-89d1d41ef025"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 11 00:18:25.697329 kubelet[2770]: I0911 00:18:25.697292 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c41fe9c-3c2c-412b-a69d-89d1d41ef025-kube-api-access-7xw87" (OuterVolumeSpecName: "kube-api-access-7xw87") pod "2c41fe9c-3c2c-412b-a69d-89d1d41ef025" (UID: "2c41fe9c-3c2c-412b-a69d-89d1d41ef025"). InnerVolumeSpecName "kube-api-access-7xw87". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 00:18:25.698647 kubelet[2770]: I0911 00:18:25.698605 2770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e150ed-be6b-4713-ac8f-63896a7aff87-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "42e150ed-be6b-4713-ac8f-63896a7aff87" (UID: "42e150ed-be6b-4713-ac8f-63896a7aff87"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 00:18:25.790217 kubelet[2770]: I0911 00:18:25.789936 2770 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790217 kubelet[2770]: I0911 00:18:25.789995 2770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4gpw\" (UniqueName: \"kubernetes.io/projected/42e150ed-be6b-4713-ac8f-63896a7aff87-kube-api-access-c4gpw\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790217 kubelet[2770]: I0911 00:18:25.790007 2770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xw87\" (UniqueName: \"kubernetes.io/projected/2c41fe9c-3c2c-412b-a69d-89d1d41ef025-kube-api-access-7xw87\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790217 kubelet[2770]: I0911 00:18:25.790017 2770 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790217 kubelet[2770]: I0911 00:18:25.790029 2770 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790217 kubelet[2770]: I0911 00:18:25.790039 2770 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790217 kubelet[2770]: I0911 00:18:25.790048 2770 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42e150ed-be6b-4713-ac8f-63896a7aff87-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790217 kubelet[2770]: I0911 00:18:25.790058 2770 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790592 kubelet[2770]: I0911 00:18:25.790070 2770 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42e150ed-be6b-4713-ac8f-63896a7aff87-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790592 kubelet[2770]: I0911 00:18:25.790081 2770 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c41fe9c-3c2c-412b-a69d-89d1d41ef025-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.790592 kubelet[2770]: I0911 00:18:25.790093 2770 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42e150ed-be6b-4713-ac8f-63896a7aff87-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:25.999656 systemd[1]: var-lib-kubelet-pods-2c41fe9c\x2d3c2c\x2d412b\x2da69d\x2d89d1d41ef025-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7xw87.mount: Deactivated successfully. Sep 11 00:18:25.999796 systemd[1]: var-lib-kubelet-pods-42e150ed\x2dbe6b\x2d4713\x2dac8f\x2d63896a7aff87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc4gpw.mount: Deactivated successfully. Sep 11 00:18:25.999903 systemd[1]: var-lib-kubelet-pods-42e150ed\x2dbe6b\x2d4713\x2dac8f\x2d63896a7aff87-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 11 00:18:26.000009 systemd[1]: var-lib-kubelet-pods-42e150ed\x2dbe6b\x2d4713\x2dac8f\x2d63896a7aff87-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 11 00:18:26.139783 systemd[1]: Removed slice kubepods-besteffort-pod2c41fe9c_3c2c_412b_a69d_89d1d41ef025.slice - libcontainer container kubepods-besteffort-pod2c41fe9c_3c2c_412b_a69d_89d1d41ef025.slice. Sep 11 00:18:26.141689 systemd[1]: Removed slice kubepods-burstable-pod42e150ed_be6b_4713_ac8f_63896a7aff87.slice - libcontainer container kubepods-burstable-pod42e150ed_be6b_4713_ac8f_63896a7aff87.slice. Sep 11 00:18:26.141784 systemd[1]: kubepods-burstable-pod42e150ed_be6b_4713_ac8f_63896a7aff87.slice: Consumed 7.690s CPU time, 130M memory peak, 308K read from disk, 13.3M written to disk. Sep 11 00:18:26.207978 sshd[4420]: Connection closed by 10.0.0.1 port 58670 Sep 11 00:18:26.208392 sshd-session[4417]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:26.222015 systemd[1]: sshd@27-10.0.0.58:22-10.0.0.1:58670.service: Deactivated successfully. Sep 11 00:18:26.224882 systemd[1]: session-28.scope: Deactivated successfully. Sep 11 00:18:26.225911 systemd-logind[1548]: Session 28 logged out. Waiting for processes to exit. Sep 11 00:18:26.230606 systemd[1]: Started sshd@28-10.0.0.58:22-10.0.0.1:58676.service - OpenSSH per-connection server daemon (10.0.0.1:58676). Sep 11 00:18:26.231409 systemd-logind[1548]: Removed session 28. Sep 11 00:18:26.285741 sshd[4576]: Accepted publickey for core from 10.0.0.1 port 58676 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:26.287460 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:26.293391 systemd-logind[1548]: New session 29 of user core. Sep 11 00:18:26.312589 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 11 00:18:26.465464 kubelet[2770]: I0911 00:18:26.465027 2770 scope.go:117] "RemoveContainer" containerID="8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630" Sep 11 00:18:26.472656 containerd[1561]: time="2025-09-11T00:18:26.472293119Z" level=info msg="RemoveContainer for \"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\"" Sep 11 00:18:26.505997 containerd[1561]: time="2025-09-11T00:18:26.505915263Z" level=info msg="RemoveContainer for \"8e2b29f368741acf22f82f1dc4cf42afbf579f4ba6f91ed4d4b1f20b75578630\" returns successfully" Sep 11 00:18:26.506433 kubelet[2770]: I0911 00:18:26.506373 2770 scope.go:117] "RemoveContainer" containerID="0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500" Sep 11 00:18:26.510807 containerd[1561]: time="2025-09-11T00:18:26.510751872Z" level=info msg="RemoveContainer for \"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\"" Sep 11 00:18:26.540613 containerd[1561]: time="2025-09-11T00:18:26.540523970Z" level=info msg="RemoveContainer for \"0ff15ad71a1a1fea7e187098ed68db1edfa4981674ed7cd293b2a1b0f129f500\" returns successfully" Sep 11 00:18:26.540982 kubelet[2770]: I0911 00:18:26.540928 2770 scope.go:117] "RemoveContainer" containerID="5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773" Sep 11 00:18:26.544432 containerd[1561]: time="2025-09-11T00:18:26.544385097Z" level=info msg="RemoveContainer for \"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\"" Sep 11 00:18:26.599364 containerd[1561]: time="2025-09-11T00:18:26.599290482Z" level=info msg="RemoveContainer for \"5821ffe59c3eef7bf919be35161001228286d49fc2f3c43cda2316553e2d3773\" returns successfully" Sep 11 00:18:26.599682 kubelet[2770]: I0911 00:18:26.599650 2770 scope.go:117] "RemoveContainer" containerID="7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967" Sep 11 00:18:26.602316 containerd[1561]: time="2025-09-11T00:18:26.602240961Z" level=info msg="RemoveContainer for \"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\"" Sep 11 00:18:26.667183 containerd[1561]: time="2025-09-11T00:18:26.667090410Z" level=info msg="RemoveContainer for \"7ee196202eb8c78da472024cd2351815dcc9af702c35d93534bce3733c845967\" returns successfully" Sep 11 00:18:26.667467 kubelet[2770]: I0911 00:18:26.667421 2770 scope.go:117] "RemoveContainer" containerID="621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73" Sep 11 00:18:26.669024 containerd[1561]: time="2025-09-11T00:18:26.669000525Z" level=info msg="RemoveContainer for \"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\"" Sep 11 00:18:26.726203 containerd[1561]: time="2025-09-11T00:18:26.725999524Z" level=info msg="RemoveContainer for \"621839a0dea8040ef6503b1c51ffe909b4abc7b1efaac3b94cfbc351c0d0ae73\" returns successfully" Sep 11 00:18:26.726380 kubelet[2770]: I0911 00:18:26.726347 2770 scope.go:117] "RemoveContainer" containerID="3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8" Sep 11 00:18:26.728432 containerd[1561]: time="2025-09-11T00:18:26.728393182Z" level=info msg="RemoveContainer for \"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\"" Sep 11 00:18:26.745950 containerd[1561]: time="2025-09-11T00:18:26.745866522Z" level=info msg="RemoveContainer for \"3ed5e5c5d453204496168142c8daee7eeabf307a59fc4f21cd6b916159cb48c8\" returns successfully" Sep 11 00:18:27.005892 sshd[4579]: Connection closed by 10.0.0.1 port 58676 Sep 11 00:18:27.006805 sshd-session[4576]: pam_unix(sshd:session): session 
closed for user core Sep 11 00:18:27.018653 systemd[1]: sshd@28-10.0.0.58:22-10.0.0.1:58676.service: Deactivated successfully. Sep 11 00:18:27.022341 systemd[1]: session-29.scope: Deactivated successfully. Sep 11 00:18:27.024623 systemd-logind[1548]: Session 29 logged out. Waiting for processes to exit. Sep 11 00:18:27.032899 systemd[1]: Started sshd@29-10.0.0.58:22-10.0.0.1:58690.service - OpenSSH per-connection server daemon (10.0.0.1:58690). Sep 11 00:18:27.036473 systemd-logind[1548]: Removed session 29. Sep 11 00:18:27.053187 kubelet[2770]: E0911 00:18:27.053103 2770 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42e150ed-be6b-4713-ac8f-63896a7aff87" containerName="mount-cgroup" Sep 11 00:18:27.053187 kubelet[2770]: E0911 00:18:27.053157 2770 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42e150ed-be6b-4713-ac8f-63896a7aff87" containerName="apply-sysctl-overwrites" Sep 11 00:18:27.053187 kubelet[2770]: E0911 00:18:27.053167 2770 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c41fe9c-3c2c-412b-a69d-89d1d41ef025" containerName="cilium-operator" Sep 11 00:18:27.053187 kubelet[2770]: E0911 00:18:27.053174 2770 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42e150ed-be6b-4713-ac8f-63896a7aff87" containerName="clean-cilium-state" Sep 11 00:18:27.053187 kubelet[2770]: E0911 00:18:27.053181 2770 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42e150ed-be6b-4713-ac8f-63896a7aff87" containerName="mount-bpf-fs" Sep 11 00:18:27.053187 kubelet[2770]: E0911 00:18:27.053187 2770 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42e150ed-be6b-4713-ac8f-63896a7aff87" containerName="cilium-agent" Sep 11 00:18:27.054050 kubelet[2770]: I0911 00:18:27.053219 2770 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e150ed-be6b-4713-ac8f-63896a7aff87" containerName="cilium-agent" Sep 11 00:18:27.054050 kubelet[2770]: I0911 00:18:27.053229 2770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c41fe9c-3c2c-412b-a69d-89d1d41ef025" containerName="cilium-operator" Sep 11 00:18:27.067354 systemd[1]: Created slice kubepods-burstable-pod544c7e91_3510_4f9b_a3b2_7683c8011f2d.slice - libcontainer container kubepods-burstable-pod544c7e91_3510_4f9b_a3b2_7683c8011f2d.slice. 
Sep 11 00:18:27.092913 sshd[4591]: Accepted publickey for core from 10.0.0.1 port 58690 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:27.095181 sshd-session[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:27.097580 kubelet[2770]: I0911 00:18:27.097527 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-bpf-maps\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.097791 kubelet[2770]: I0911 00:18:27.097769 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-cilium-cgroup\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.097876 kubelet[2770]: I0911 00:18:27.097850 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-xtables-lock\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.097958 kubelet[2770]: I0911 00:18:27.097936 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/544c7e91-3510-4f9b-a3b2-7683c8011f2d-cilium-config-path\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098014 kubelet[2770]: I0911 00:18:27.097986 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-host-proc-sys-net\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098059 kubelet[2770]: I0911 00:18:27.098030 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-host-proc-sys-kernel\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098101 kubelet[2770]: I0911 00:18:27.098062 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-etc-cni-netd\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098136 kubelet[2770]: I0911 00:18:27.098097 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-cilium-run\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098136 kubelet[2770]: I0911 00:18:27.098117 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-hostproc\") pod \"cilium-zpfbp\" (UID: 
\"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098423 kubelet[2770]: I0911 00:18:27.098135 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-cni-path\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098423 kubelet[2770]: I0911 00:18:27.098166 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/544c7e91-3510-4f9b-a3b2-7683c8011f2d-lib-modules\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098423 kubelet[2770]: I0911 00:18:27.098185 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/544c7e91-3510-4f9b-a3b2-7683c8011f2d-cilium-ipsec-secrets\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098423 kubelet[2770]: I0911 00:18:27.098214 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/544c7e91-3510-4f9b-a3b2-7683c8011f2d-hubble-tls\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098423 kubelet[2770]: I0911 00:18:27.098240 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/544c7e91-3510-4f9b-a3b2-7683c8011f2d-clustermesh-secrets\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.098423 kubelet[2770]: I0911 00:18:27.098265 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h7f4\" (UniqueName: \"kubernetes.io/projected/544c7e91-3510-4f9b-a3b2-7683c8011f2d-kube-api-access-9h7f4\") pod \"cilium-zpfbp\" (UID: \"544c7e91-3510-4f9b-a3b2-7683c8011f2d\") " pod="kube-system/cilium-zpfbp" Sep 11 00:18:27.101611 systemd-logind[1548]: New session 30 of user core. Sep 11 00:18:27.111701 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 11 00:18:27.169693 sshd[4594]: Connection closed by 10.0.0.1 port 58690 Sep 11 00:18:27.170133 sshd-session[4591]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:27.186117 systemd[1]: sshd@29-10.0.0.58:22-10.0.0.1:58690.service: Deactivated successfully. Sep 11 00:18:27.188823 systemd[1]: session-30.scope: Deactivated successfully. Sep 11 00:18:27.190003 systemd-logind[1548]: Session 30 logged out. Waiting for processes to exit. Sep 11 00:18:27.193898 systemd[1]: Started sshd@30-10.0.0.58:22-10.0.0.1:58706.service - OpenSSH per-connection server daemon (10.0.0.1:58706). Sep 11 00:18:27.194877 systemd-logind[1548]: Removed session 30. Sep 11 00:18:27.250705 sshd[4601]: Accepted publickey for core from 10.0.0.1 port 58706 ssh2: RSA SHA256:y/XwUTkYMtMNacauLj7j4r7D0OZbB+8bKKbHTNwhPa4 Sep 11 00:18:27.252333 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:27.256987 systemd-logind[1548]: New session 31 of user core. 
Sep 11 00:18:27.269693 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 11 00:18:27.376400 kubelet[2770]: E0911 00:18:27.376329 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:27.377168 containerd[1561]: time="2025-09-11T00:18:27.377094657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zpfbp,Uid:544c7e91-3510-4f9b-a3b2-7683c8011f2d,Namespace:kube-system,Attempt:0,}" Sep 11 00:18:27.646599 containerd[1561]: time="2025-09-11T00:18:27.646499605Z" level=info msg="connecting to shim 41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb" address="unix:///run/containerd/s/30d55bb3c41c1d844536c9149f3354f9614b49e0e769f30aeec3fcc7c15fcfee" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:18:27.677891 systemd[1]: Started cri-containerd-41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb.scope - libcontainer container 41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb. Sep 11 00:18:27.735540 containerd[1561]: time="2025-09-11T00:18:27.735443344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zpfbp,Uid:544c7e91-3510-4f9b-a3b2-7683c8011f2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\"" Sep 11 00:18:27.736466 kubelet[2770]: E0911 00:18:27.736435 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:27.739932 containerd[1561]: time="2025-09-11T00:18:27.739885161Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 00:18:27.808896 containerd[1561]: time="2025-09-11T00:18:27.808831784Z" level=info msg="Container 0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:27.823408 containerd[1561]: time="2025-09-11T00:18:27.823314195Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058\"" Sep 11 00:18:27.824303 containerd[1561]: time="2025-09-11T00:18:27.824220144Z" level=info msg="StartContainer for \"0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058\"" Sep 11 00:18:27.825748 containerd[1561]: time="2025-09-11T00:18:27.825709066Z" level=info msg="connecting to shim 0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058" address="unix:///run/containerd/s/30d55bb3c41c1d844536c9149f3354f9614b49e0e769f30aeec3fcc7c15fcfee" protocol=ttrpc version=3 Sep 11 00:18:27.863970 systemd[1]: Started cri-containerd-0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058.scope - libcontainer container 0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058. Sep 11 00:18:27.925493 systemd[1]: cri-containerd-0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058.scope: Deactivated successfully. 
Sep 11 00:18:27.927003 containerd[1561]: time="2025-09-11T00:18:27.926953842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058\" id:\"0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058\" pid:4673 exited_at:{seconds:1757549907 nanos:926420043}"
Sep 11 00:18:27.961030 containerd[1561]: time="2025-09-11T00:18:27.960953815Z" level=info msg="received exit event container_id:\"0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058\" id:\"0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058\" pid:4673 exited_at:{seconds:1757549907 nanos:926420043}"
Sep 11 00:18:27.962118 containerd[1561]: time="2025-09-11T00:18:27.962083932Z" level=info msg="StartContainer for \"0c0da9ab3657316139d7527292b013ab89dbb481a85523b97a93159244499058\" returns successfully"
Sep 11 00:18:28.169024 kubelet[2770]: I0911 00:18:28.168965 2770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c41fe9c-3c2c-412b-a69d-89d1d41ef025" path="/var/lib/kubelet/pods/2c41fe9c-3c2c-412b-a69d-89d1d41ef025/volumes"
Sep 11 00:18:28.169661 kubelet[2770]: I0911 00:18:28.169622 2770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e150ed-be6b-4713-ac8f-63896a7aff87" path="/var/lib/kubelet/pods/42e150ed-be6b-4713-ac8f-63896a7aff87/volumes"
Sep 11 00:18:28.489419 kubelet[2770]: E0911 00:18:28.489192 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:28.491237 containerd[1561]: time="2025-09-11T00:18:28.491194291Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 11 00:18:28.595902 containerd[1561]: time="2025-09-11T00:18:28.595843467Z" level=info msg="Container 6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:18:28.600060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774076122.mount: Deactivated successfully.
Sep 11 00:18:28.621246 containerd[1561]: time="2025-09-11T00:18:28.621180378Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb\""
Sep 11 00:18:28.622401 containerd[1561]: time="2025-09-11T00:18:28.622341455Z" level=info msg="StartContainer for \"6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb\""
Sep 11 00:18:28.623652 containerd[1561]: time="2025-09-11T00:18:28.623611309Z" level=info msg="connecting to shim 6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb" address="unix:///run/containerd/s/30d55bb3c41c1d844536c9149f3354f9614b49e0e769f30aeec3fcc7c15fcfee" protocol=ttrpc version=3
Sep 11 00:18:28.655895 systemd[1]: Started cri-containerd-6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb.scope - libcontainer container 6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb.
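[Editor's note] The TaskExit and "received exit event" entries above carry the container's end time as exited_at {seconds, nanos} in Unix time. The small sketch below, using the values copied from the mount-cgroup exit event, converts them back to wall-clock time and lands on the same 00:18:27.926 instant the journal records.

// exitedat.go - convert the exited_at {seconds, nanos} fields from the TaskExit event above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken verbatim from the mount-cgroup exit event above.
	exited := time.Unix(1757549907, 926420043).UTC()
	fmt.Println(exited.Format(time.RFC3339Nano)) // prints 2025-09-11T00:18:27.926420043Z
}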
Sep 11 00:18:28.698705 containerd[1561]: time="2025-09-11T00:18:28.698634737Z" level=info msg="StartContainer for \"6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb\" returns successfully"
Sep 11 00:18:28.706995 systemd[1]: cri-containerd-6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb.scope: Deactivated successfully.
Sep 11 00:18:28.707840 containerd[1561]: time="2025-09-11T00:18:28.707678092Z" level=info msg="received exit event container_id:\"6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb\" id:\"6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb\" pid:4718 exited_at:{seconds:1757549908 nanos:707182916}"
Sep 11 00:18:28.708679 containerd[1561]: time="2025-09-11T00:18:28.708631082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb\" id:\"6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb\" pid:4718 exited_at:{seconds:1757549908 nanos:707182916}"
Sep 11 00:18:28.735737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dbd842a0fedbc9b5649436b995be2eaccf99c47121a408cd8d2e4204410f2eb-rootfs.mount: Deactivated successfully.
Sep 11 00:18:29.495842 kubelet[2770]: E0911 00:18:29.495501 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:29.497346 containerd[1561]: time="2025-09-11T00:18:29.497304064Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 11 00:18:29.518609 containerd[1561]: time="2025-09-11T00:18:29.518545878Z" level=info msg="Container e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:18:29.523924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount941988513.mount: Deactivated successfully.
Sep 11 00:18:29.528378 containerd[1561]: time="2025-09-11T00:18:29.528298843Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81\""
Sep 11 00:18:29.529130 containerd[1561]: time="2025-09-11T00:18:29.529074374Z" level=info msg="StartContainer for \"e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81\""
Sep 11 00:18:29.530962 containerd[1561]: time="2025-09-11T00:18:29.530924376Z" level=info msg="connecting to shim e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81" address="unix:///run/containerd/s/30d55bb3c41c1d844536c9149f3354f9614b49e0e769f30aeec3fcc7c15fcfee" protocol=ttrpc version=3
Sep 11 00:18:29.558801 systemd[1]: Started cri-containerd-e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81.scope - libcontainer container e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81.
Sep 11 00:18:29.613081 systemd[1]: cri-containerd-e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81.scope: Deactivated successfully.
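[Editor's note] The recurring dns.go:153 errors are kubelet reporting that the node's resolver configuration lists more nameservers than the limit of three that can be applied, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are kept and the rest are omitted. The sketch below is an illustrative approximation of that check, not kubelet's own code; the resolv.conf path is an assumption.

// nameservers.go - approximate the check behind the "Nameserver limits exceeded" messages.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // resolver limit: only the first three nameserver lines are applied

func main() {
	f, err := os.Open("/etc/resolv.conf") // assumption: host resolv.conf path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("limit exceeded: applied %v, omitted %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("within limit: %v\n", servers)
	}
}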
Sep 11 00:18:29.614060 containerd[1561]: time="2025-09-11T00:18:29.613918491Z" level=info msg="received exit event container_id:\"e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81\" id:\"e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81\" pid:4761 exited_at:{seconds:1757549909 nanos:613708700}"
Sep 11 00:18:29.614747 containerd[1561]: time="2025-09-11T00:18:29.614703150Z" level=info msg="StartContainer for \"e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81\" returns successfully"
Sep 11 00:18:29.615032 containerd[1561]: time="2025-09-11T00:18:29.614998484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81\" id:\"e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81\" pid:4761 exited_at:{seconds:1757549909 nanos:613708700}"
Sep 11 00:18:29.638663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5ca88fb3ef85ef1241de55487ce66d0ccef10f6fba97d40acf0bd3400e84a81-rootfs.mount: Deactivated successfully.
Sep 11 00:18:30.204927 kubelet[2770]: E0911 00:18:30.204879 2770 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 11 00:18:30.502589 kubelet[2770]: E0911 00:18:30.502400 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:30.505451 containerd[1561]: time="2025-09-11T00:18:30.505387842Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 11 00:18:30.527871 containerd[1561]: time="2025-09-11T00:18:30.527797836Z" level=info msg="Container abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:18:30.543350 containerd[1561]: time="2025-09-11T00:18:30.543286421Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab\""
Sep 11 00:18:30.545728 containerd[1561]: time="2025-09-11T00:18:30.545684461Z" level=info msg="StartContainer for \"abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab\""
Sep 11 00:18:30.548454 containerd[1561]: time="2025-09-11T00:18:30.548401391Z" level=info msg="connecting to shim abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab" address="unix:///run/containerd/s/30d55bb3c41c1d844536c9149f3354f9614b49e0e769f30aeec3fcc7c15fcfee" protocol=ttrpc version=3
Sep 11 00:18:30.589797 systemd[1]: Started cri-containerd-abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab.scope - libcontainer container abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab.
Sep 11 00:18:30.622550 systemd[1]: cri-containerd-abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab.scope: Deactivated successfully.
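[Editor's note] The "Container runtime network not ready ... cni plugin not initialized" errors here, and the "Node became not ready" transition a little further on, persist until the cilium-agent container installs its CNI configuration. A minimal sketch follows, assuming a kubeconfig at ~/.kube/config and the node name "localhost" taken from the log, that reads the same Ready condition through the API server.

// nodeready.go - minimal sketch: inspect the node Ready condition reported in the entries above.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// Mirrors the condition payload in the "Node became not ready" entry below.
			fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}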
Sep 11 00:18:30.623437 containerd[1561]: time="2025-09-11T00:18:30.622625133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab\" id:\"abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab\" pid:4801 exited_at:{seconds:1757549910 nanos:622449277}"
Sep 11 00:18:30.626846 containerd[1561]: time="2025-09-11T00:18:30.626776221Z" level=info msg="received exit event container_id:\"abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab\" id:\"abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab\" pid:4801 exited_at:{seconds:1757549910 nanos:622449277}"
Sep 11 00:18:30.636063 containerd[1561]: time="2025-09-11T00:18:30.635989109Z" level=info msg="StartContainer for \"abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab\" returns successfully"
Sep 11 00:18:30.655499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abdbba1cd165af68061bd7b3459fcf5e9d320db5f7a59fc313535e9c640f3cab-rootfs.mount: Deactivated successfully.
Sep 11 00:18:31.129437 kubelet[2770]: E0911 00:18:31.129365 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:31.510773 kubelet[2770]: E0911 00:18:31.510613 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:31.514995 containerd[1561]: time="2025-09-11T00:18:31.514940499Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 11 00:18:31.545551 containerd[1561]: time="2025-09-11T00:18:31.542956214Z" level=info msg="Container cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:18:31.559448 containerd[1561]: time="2025-09-11T00:18:31.559364763Z" level=info msg="CreateContainer within sandbox \"41f83b03c88ac22f5da749c2a61154d9b422b58941ff4d589f7e60a10c24cceb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6\""
Sep 11 00:18:31.561534 containerd[1561]: time="2025-09-11T00:18:31.559989737Z" level=info msg="StartContainer for \"cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6\""
Sep 11 00:18:31.561534 containerd[1561]: time="2025-09-11T00:18:31.561165493Z" level=info msg="connecting to shim cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6" address="unix:///run/containerd/s/30d55bb3c41c1d844536c9149f3354f9614b49e0e769f30aeec3fcc7c15fcfee" protocol=ttrpc version=3
Sep 11 00:18:31.588051 systemd[1]: Started cri-containerd-cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6.scope - libcontainer container cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6.
Sep 11 00:18:31.709380 containerd[1561]: time="2025-09-11T00:18:31.709290520Z" level=info msg="StartContainer for \"cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6\" returns successfully"
Sep 11 00:18:31.793367 containerd[1561]: time="2025-09-11T00:18:31.793236480Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6\" id:\"87d247c81c7d0481337c6b182bc459378f2bef1c5dd7143995f68b6a59f72c97\" pid:4873 exited_at:{seconds:1757549911 nanos:792830003}"
Sep 11 00:18:32.129407 kubelet[2770]: E0911 00:18:32.129339 2770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-nh6rd" podUID="6e5f26ab-f8cf-4eab-b642-b6ed53d93cf9"
Sep 11 00:18:32.216611 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 11 00:18:32.519120 kubelet[2770]: E0911 00:18:32.518578 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:32.802349 kubelet[2770]: I0911 00:18:32.802100 2770 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-11T00:18:32Z","lastTransitionTime":"2025-09-11T00:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 11 00:18:33.519933 kubelet[2770]: E0911 00:18:33.519877 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:33.761175 containerd[1561]: time="2025-09-11T00:18:33.761118105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6\" id:\"7f33b2b501f95cc0e7b3c50c6e82e6ab99488e3dae21e38c2e9bbc6ba7dd58b8\" pid:4945 exit_status:1 exited_at:{seconds:1757549913 nanos:760620434}"
Sep 11 00:18:34.129850 kubelet[2770]: E0911 00:18:34.129745 2770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-nh6rd" podUID="6e5f26ab-f8cf-4eab-b642-b6ed53d93cf9"
Sep 11 00:18:35.765724 systemd-networkd[1484]: lxc_health: Link UP
Sep 11 00:18:35.766130 systemd-networkd[1484]: lxc_health: Gained carrier
Sep 11 00:18:35.926224 containerd[1561]: time="2025-09-11T00:18:35.926162896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6\" id:\"c4143e8d31b21589119cffe145df16b53292fa49c991c7405d6d69878173e2bf\" pid:5393 exit_status:1 exited_at:{seconds:1757549915 nanos:923446863}"
Sep 11 00:18:36.132547 kubelet[2770]: E0911 00:18:36.130581 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:37.378718 kubelet[2770]: E0911 00:18:37.378650 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:37.402658 kubelet[2770]: I0911 00:18:37.402561 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zpfbp" podStartSLOduration=10.401488704 podStartE2EDuration="10.401488704s" podCreationTimestamp="2025-09-11 00:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:18:32.961025382 +0000 UTC m=+102.953882854" watchObservedRunningTime="2025-09-11 00:18:37.401488704 +0000 UTC m=+107.394346176"
Sep 11 00:18:37.532271 kubelet[2770]: E0911 00:18:37.532127 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:37.589994 systemd-networkd[1484]: lxc_health: Gained IPv6LL
Sep 11 00:18:38.022088 containerd[1561]: time="2025-09-11T00:18:38.022037657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6\" id:\"c975afac75391c296ab27d36e120747c3387423b0f36989ba2f2fd878f449633\" pid:5430 exited_at:{seconds:1757549918 nanos:21696314}"
Sep 11 00:18:38.534793 kubelet[2770]: E0911 00:18:38.534725 2770 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:40.123888 containerd[1561]: time="2025-09-11T00:18:40.123834567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6\" id:\"8cd76151b528908a30686417f9ae8e9e5dbbd76bcf85d19ecc8b9698df2b2cd7\" pid:5462 exited_at:{seconds:1757549920 nanos:123280177}"
Sep 11 00:18:42.260899 containerd[1561]: time="2025-09-11T00:18:42.260801473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc4c556ac2a401c50f9eaf57e497930238e4d18fc0f26b69068c87c5ba4c85b6\" id:\"11f238d19d85e9a484f56ec5f163339182312a86545fc2d56e60d3746e056b0e\" pid:5485 exited_at:{seconds:1757549922 nanos:260265798}"
Sep 11 00:18:42.269275 sshd[4608]: Connection closed by 10.0.0.1 port 58706
Sep 11 00:18:42.269868 sshd-session[4601]: pam_unix(sshd:session): session closed for user core
Sep 11 00:18:42.276567 systemd[1]: sshd@30-10.0.0.58:22-10.0.0.1:58706.service: Deactivated successfully.
Sep 11 00:18:42.279440 systemd[1]: session-31.scope: Deactivated successfully.
Sep 11 00:18:42.280440 systemd-logind[1548]: Session 31 logged out. Waiting for processes to exit.
Sep 11 00:18:42.282160 systemd-logind[1548]: Removed session 31.
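[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration=10.401488704, which is exactly the gap between podCreationTimestamp (2025-09-11 00:18:27 +0000 UTC) and watchObservedRunningTime (2025-09-11 00:18:37.401488704 +0000 UTC). The short sketch below reproduces that arithmetic using the timestamps copied from the entry.

// startup.go - recompute podStartSLOduration from the timestamps in the entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time string form used in the kubelet entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-09-11 00:18:27 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-09-11 00:18:37.401488704 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // prints 10.401488704s
}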