Sep 10 00:30:42.880252 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 22:56:44 -00 2025
Sep 10 00:30:42.880276 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:30:42.880288 kernel: BIOS-provided physical RAM map:
Sep 10 00:30:42.880294 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 10 00:30:42.880301 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 10 00:30:42.880307 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 10 00:30:42.880314 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 10 00:30:42.880321 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 10 00:30:42.880327 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:30:42.880336 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 10 00:30:42.880343 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 10 00:30:42.880349 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 10 00:30:42.880359 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 10 00:30:42.880366 kernel: NX (Execute Disable) protection: active
Sep 10 00:30:42.880373 kernel: APIC: Static calls initialized
Sep 10 00:30:42.880385 kernel: SMBIOS 2.8 present.
Sep 10 00:30:42.880392 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 10 00:30:42.880399 kernel: Hypervisor detected: KVM
Sep 10 00:30:42.880406 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 10 00:30:42.880413 kernel: kvm-clock: using sched offset of 2878168763 cycles
Sep 10 00:30:42.880420 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 10 00:30:42.880427 kernel: tsc: Detected 2794.748 MHz processor
Sep 10 00:30:42.880434 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 10 00:30:42.880442 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 10 00:30:42.880449 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 10 00:30:42.880458 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 10 00:30:42.880465 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 10 00:30:42.880472 kernel: Using GB pages for direct mapping
Sep 10 00:30:42.880480 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:30:42.880487 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 10 00:30:42.880494 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:30:42.880501 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:30:42.880508 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:30:42.880518 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 10 00:30:42.880525 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:30:42.880532 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:30:42.880539 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:30:42.880546 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:30:42.880553 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 10 00:30:42.880560 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 10 00:30:42.880571 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 10 00:30:42.880581 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 10 00:30:42.880588 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 10 00:30:42.880596 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 10 00:30:42.880603 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 10 00:30:42.880612 kernel: No NUMA configuration found
Sep 10 00:30:42.880620 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 10 00:30:42.880627 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 10 00:30:42.880637 kernel: Zone ranges:
Sep 10 00:30:42.880644 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 10 00:30:42.880652 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 10 00:30:42.880659 kernel: Normal empty
Sep 10 00:30:42.880666 kernel: Movable zone start for each node
Sep 10 00:30:42.880673 kernel: Early memory node ranges
Sep 10 00:30:42.880681 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 10 00:30:42.880688 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 10 00:30:42.880695 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 10 00:30:42.880705 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:30:42.880714 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 10 00:30:42.880721 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 10 00:30:42.880729 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 10 00:30:42.880736 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 10 00:30:42.880743 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 10 00:30:42.880760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 10 00:30:42.880770 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 10 00:30:42.880778 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 10 00:30:42.880788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 10 00:30:42.880812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 10 00:30:42.880822 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 10 00:30:42.880839 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 10 00:30:42.880863 kernel: TSC deadline timer available
Sep 10 00:30:42.880880 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 10 00:30:42.880904 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 10 00:30:42.880913 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 10 00:30:42.880929 kernel: kvm-guest: setup PV sched yield
Sep 10 00:30:42.880950 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 10 00:30:42.880957 kernel: Booting paravirtualized kernel on KVM
Sep 10 00:30:42.880965 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 10 00:30:42.880972 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 10 00:30:42.880979 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 10 00:30:42.880987 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 10 00:30:42.880994 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 10 00:30:42.881001 kernel: kvm-guest: PV spinlocks enabled
Sep 10 00:30:42.881008 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 10 00:30:42.881019 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:30:42.881027 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:30:42.881035 kernel: random: crng init done
Sep 10 00:30:42.881042 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:30:42.881049 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:30:42.881057 kernel: Fallback order for Node 0: 0
Sep 10 00:30:42.881064 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 10 00:30:42.881071 kernel: Policy zone: DMA32
Sep 10 00:30:42.881081 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:30:42.881089 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136904K reserved, 0K cma-reserved)
Sep 10 00:30:42.881097 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:30:42.881104 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 10 00:30:42.881111 kernel: ftrace: allocated 149 pages with 4 groups
Sep 10 00:30:42.881127 kernel: Dynamic Preempt: voluntary
Sep 10 00:30:42.881134 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 00:30:42.881143 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:30:42.881151 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:30:42.881173 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 00:30:42.881181 kernel: Rude variant of Tasks RCU enabled.
Sep 10 00:30:42.881188 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:30:42.881195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:30:42.881205 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:30:42.881213 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 10 00:30:42.881220 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 00:30:42.881228 kernel: Console: colour VGA+ 80x25
Sep 10 00:30:42.881235 kernel: printk: console [ttyS0] enabled
Sep 10 00:30:42.881242 kernel: ACPI: Core revision 20230628
Sep 10 00:30:42.881253 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 10 00:30:42.881260 kernel: APIC: Switch to symmetric I/O mode setup
Sep 10 00:30:42.881267 kernel: x2apic enabled
Sep 10 00:30:42.881275 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 10 00:30:42.881282 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 10 00:30:42.881289 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 10 00:30:42.881297 kernel: kvm-guest: setup PV IPIs
Sep 10 00:30:42.881314 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 10 00:30:42.881322 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 10 00:30:42.881329 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 10 00:30:42.881337 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 10 00:30:42.881347 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 10 00:30:42.881355 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 10 00:30:42.881363 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 10 00:30:42.881370 kernel: Spectre V2 : Mitigation: Retpolines
Sep 10 00:30:42.881378 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 10 00:30:42.881388 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 10 00:30:42.881396 kernel: active return thunk: retbleed_return_thunk
Sep 10 00:30:42.881406 kernel: RETBleed: Mitigation: untrained return thunk
Sep 10 00:30:42.881414 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 10 00:30:42.881422 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 10 00:30:42.881430 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 10 00:30:42.881438 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 10 00:30:42.881446 kernel: active return thunk: srso_return_thunk
Sep 10 00:30:42.881456 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 10 00:30:42.881464 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 10 00:30:42.881472 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 10 00:30:42.881480 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 10 00:30:42.881488 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 10 00:30:42.881496 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 10 00:30:42.881504 kernel: Freeing SMP alternatives memory: 32K
Sep 10 00:30:42.881512 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:30:42.881520 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 00:30:42.881531 kernel: landlock: Up and running.
Sep 10 00:30:42.881538 kernel: SELinux: Initializing.
Sep 10 00:30:42.881546 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:30:42.881554 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:30:42.881563 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 10 00:30:42.881571 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:30:42.881579 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:30:42.881587 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:30:42.881597 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 10 00:30:42.881608 kernel: ... version:                0
Sep 10 00:30:42.881616 kernel: ... bit width:              48
Sep 10 00:30:42.881624 kernel: ... generic registers:      6
Sep 10 00:30:42.881632 kernel: ... value mask:             0000ffffffffffff
Sep 10 00:30:42.881640 kernel: ... max period:             00007fffffffffff
Sep 10 00:30:42.881647 kernel: ... fixed-purpose events:   0
Sep 10 00:30:42.881655 kernel: ... event mask:             000000000000003f
Sep 10 00:30:42.881663 kernel: signal: max sigframe size: 1776
Sep 10 00:30:42.881671 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:30:42.881682 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 00:30:42.881689 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:30:42.881697 kernel: smpboot: x86: Booting SMP configuration:
Sep 10 00:30:42.881705 kernel: .... node #0, CPUs:      #1 #2 #3
Sep 10 00:30:42.881713 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:30:42.881721 kernel: smpboot: Max logical packages: 1
Sep 10 00:30:42.881729 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 10 00:30:42.881737 kernel: devtmpfs: initialized
Sep 10 00:30:42.881744 kernel: x86/mm: Memory block size: 128MB
Sep 10 00:30:42.881755 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:30:42.881762 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:30:42.881770 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:30:42.881778 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:30:42.881785 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:30:42.881793 kernel: audit: type=2000 audit(1757464241.588:1): state=initialized audit_enabled=0 res=1
Sep 10 00:30:42.881801 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:30:42.881808 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 10 00:30:42.881816 kernel: cpuidle: using governor menu
Sep 10 00:30:42.881826 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:30:42.881833 kernel: dca service started, version 1.12.1
Sep 10 00:30:42.881841 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 10 00:30:42.881849 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 10 00:30:42.881857 kernel: PCI: Using configuration type 1 for base access
Sep 10 00:30:42.881864 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 10 00:30:42.881872 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:30:42.881880 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 00:30:42.881888 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:30:42.881898 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 00:30:42.881906 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:30:42.881914 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:30:42.881921 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:30:42.881929 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:30:42.881937 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 10 00:30:42.881944 kernel: ACPI: Interpreter enabled
Sep 10 00:30:42.881952 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 10 00:30:42.881960 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 10 00:30:42.881970 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 10 00:30:42.881978 kernel: PCI: Using E820 reservations for host bridge windows
Sep 10 00:30:42.881986 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 10 00:30:42.881993 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:30:42.882326 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:30:42.882497 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 10 00:30:42.882670 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 10 00:30:42.882682 kernel: PCI host bridge to bus 0000:00
Sep 10 00:30:42.882833 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 10 00:30:42.882953 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 10 00:30:42.883070 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 10 00:30:42.883225 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 10 00:30:42.883350 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 10 00:30:42.883465 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 10 00:30:42.883586 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:30:42.883746 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 10 00:30:42.883956 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 10 00:30:42.884089 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 10 00:30:42.884252 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 10 00:30:42.884378 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 10 00:30:42.884504 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 10 00:30:42.884655 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:30:42.884836 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 10 00:30:42.884978 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 10 00:30:42.885105 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 10 00:30:42.885292 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 10 00:30:42.885422 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 10 00:30:42.885549 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 10 00:30:42.885681 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 10 00:30:42.885833 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 10 00:30:42.885963 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 10 00:30:42.886094 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 10 00:30:42.886292 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 10 00:30:42.886422 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 10 00:30:42.886563 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 10 00:30:42.886698 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 10 00:30:42.886835 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 10 00:30:42.886961 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 10 00:30:42.887085 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 10 00:30:42.887269 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 10 00:30:42.887401 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 10 00:30:42.887416 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 10 00:30:42.887424 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 10 00:30:42.887432 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 10 00:30:42.887440 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 10 00:30:42.887448 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 10 00:30:42.887456 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 10 00:30:42.887463 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 10 00:30:42.887471 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 10 00:30:42.887479 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 10 00:30:42.887489 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 10 00:30:42.887497 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 10 00:30:42.887505 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 10 00:30:42.887513 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 10 00:30:42.887521 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 10 00:30:42.887529 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 10 00:30:42.887536 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 10 00:30:42.887544 kernel: iommu: Default domain type: Translated
Sep 10 00:30:42.887551 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 10 00:30:42.887562 kernel: PCI: Using ACPI for IRQ routing
Sep 10 00:30:42.887569 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 10 00:30:42.887577 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 10 00:30:42.887585 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 10 00:30:42.887711 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 10 00:30:42.887839 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 10 00:30:42.887966 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 10 00:30:42.887976 kernel: vgaarb: loaded
Sep 10 00:30:42.887988 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 10 00:30:42.887996 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 10 00:30:42.888004 kernel: clocksource: Switched to clocksource kvm-clock
Sep 10 00:30:42.888012 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:30:42.888019 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:30:42.888027 kernel: pnp: PnP ACPI init
Sep 10 00:30:42.888235 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 10 00:30:42.888248 kernel: pnp: PnP ACPI: found 6 devices
Sep 10 00:30:42.888261 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 10 00:30:42.888268 kernel: NET: Registered PF_INET protocol family
Sep 10 00:30:42.888276 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:30:42.888284 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:30:42.888292 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:30:42.888300 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:30:42.888307 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 00:30:42.888315 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:30:42.888323 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:30:42.888333 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:30:42.888341 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:30:42.888349 kernel: NET: Registered PF_XDP protocol family
Sep 10 00:30:42.888496 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 10 00:30:42.888638 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 10 00:30:42.888753 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 10 00:30:42.888867 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 10 00:30:42.888981 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 10 00:30:42.889094 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 10 00:30:42.889109 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:30:42.889125 kernel: Initialise system trusted keyrings
Sep 10 00:30:42.889133 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:30:42.889141 kernel: Key type asymmetric registered
Sep 10 00:30:42.889150 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:30:42.889158 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 10 00:30:42.889191 kernel: io scheduler mq-deadline registered
Sep 10 00:30:42.889199 kernel: io scheduler kyber registered
Sep 10 00:30:42.889206 kernel: io scheduler bfq registered
Sep 10 00:30:42.889218 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 10 00:30:42.889226 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 10 00:30:42.889234 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 10 00:30:42.889242 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 10 00:30:42.889249 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:30:42.889257 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 10 00:30:42.889265 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 10 00:30:42.889273 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 10 00:30:42.889280 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 10 00:30:42.889424 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 10 00:30:42.889545 kernel: rtc_cmos 00:04: registered as rtc0
Sep 10 00:30:42.889556 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 10 00:30:42.889672 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:30:42 UTC (1757464242)
Sep 10 00:30:42.889789 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 10 00:30:42.889799 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 10 00:30:42.889807 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:30:42.889819 kernel: Segment Routing with IPv6
Sep 10 00:30:42.889826 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:30:42.889834 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:30:42.889842 kernel: Key type dns_resolver registered
Sep 10 00:30:42.889849 kernel: IPI shorthand broadcast: enabled
Sep 10 00:30:42.889857 kernel: sched_clock: Marking stable (1021002607, 100768978)->(1137709026, -15937441)
Sep 10 00:30:42.889865 kernel: registered taskstats version 1
Sep 10 00:30:42.889872 kernel: Loading compiled-in X.509 certificates
Sep 10 00:30:42.889880 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: a614f1c62f27a560d677bbf0283703118c9005ec'
Sep 10 00:30:42.889888 kernel: Key type .fscrypt registered
Sep 10 00:30:42.889898 kernel: Key type fscrypt-provisioning registered
Sep 10 00:30:42.889906 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:30:42.889913 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:30:42.889921 kernel: ima: No architecture policies found
Sep 10 00:30:42.889929 kernel: clk: Disabling unused clocks
Sep 10 00:30:42.889936 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 10 00:30:42.889944 kernel: Write protecting the kernel read-only data: 36864k
Sep 10 00:30:42.889952 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 10 00:30:42.889962 kernel: Run /init as init process
Sep 10 00:30:42.889969 kernel: with arguments:
Sep 10 00:30:42.889977 kernel: /init
Sep 10 00:30:42.889984 kernel: with environment:
Sep 10 00:30:42.889992 kernel: HOME=/
Sep 10 00:30:42.890000 kernel: TERM=linux
Sep 10 00:30:42.890007 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:30:42.890017 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:30:42.890029 systemd[1]: Detected virtualization kvm.
Sep 10 00:30:42.890038 systemd[1]: Detected architecture x86-64.
Sep 10 00:30:42.890046 systemd[1]: Running in initrd.
Sep 10 00:30:42.890053 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:30:42.890061 systemd[1]: Hostname set to .
Sep 10 00:30:42.890070 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:30:42.890078 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:30:42.890086 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:30:42.890097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:30:42.890106 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 00:30:42.890135 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:30:42.890147 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 00:30:42.890155 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 00:30:42.890244 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 00:30:42.890254 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 00:30:42.890263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:30:42.890271 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:30:42.890280 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:30:42.890288 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:30:42.890297 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:30:42.890305 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:30:42.890316 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:30:42.890324 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:30:42.890333 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:30:42.890342 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 10 00:30:42.890350 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:30:42.890358 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:30:42.890367 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:30:42.890375 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:30:42.890384 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 00:30:42.890395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:30:42.890403 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 00:30:42.890411 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:30:42.890420 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:30:42.890429 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:30:42.890437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:30:42.890446 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 00:30:42.890455 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:30:42.890466 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:30:42.890475 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:30:42.890504 systemd-journald[193]: Collecting audit messages is disabled.
Sep 10 00:30:42.890526 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:30:42.890535 systemd-journald[193]: Journal started
Sep 10 00:30:42.890556 systemd-journald[193]: Runtime Journal (/run/log/journal/f2e8ebfcd2d14eb59f6fddd91b5aa61b) is 6.0M, max 48.4M, 42.3M free.
Sep 10 00:30:42.882691 systemd-modules-load[194]: Inserted module 'overlay'
Sep 10 00:30:42.920685 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:30:42.920701 kernel: Bridge firewalling registered
Sep 10 00:30:42.909759 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 10 00:30:42.923300 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:30:42.923717 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:30:42.925970 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:30:42.942343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:30:42.945345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:30:42.947911 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:30:42.951740 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:30:42.961062 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:30:42.963095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:30:42.964310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:30:42.966372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:30:42.981317 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 00:30:42.983633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:30:42.995679 dracut-cmdline[229]: dracut-dracut-053
Sep 10 00:30:42.999032 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:30:43.017803 systemd-resolved[230]: Positive Trust Anchors:
Sep 10 00:30:43.017818 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:30:43.017849 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:30:43.020433 systemd-resolved[230]: Defaulting to hostname 'linux'.
Sep 10 00:30:43.021746 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:30:43.027061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:30:43.089196 kernel: SCSI subsystem initialized
Sep 10 00:30:43.098191 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:30:43.109199 kernel: iscsi: registered transport (tcp)
Sep 10 00:30:43.130198 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:30:43.130217 kernel: QLogic iSCSI HBA Driver
Sep 10 00:30:43.185984 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:30:43.195287 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 00:30:43.221904 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:30:43.221930 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:30:43.221952 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 00:30:43.265201 kernel: raid6: avx2x4 gen() 30619 MB/s
Sep 10 00:30:43.282189 kernel: raid6: avx2x2 gen() 31526 MB/s
Sep 10 00:30:43.299212 kernel: raid6: avx2x1 gen() 26100 MB/s
Sep 10 00:30:43.299232 kernel: raid6: using algorithm avx2x2 gen() 31526 MB/s
Sep 10 00:30:43.317214 kernel: raid6: .... xor() 19922 MB/s, rmw enabled
Sep 10 00:30:43.317244 kernel: raid6: using avx2x2 recovery algorithm
Sep 10 00:30:43.337191 kernel: xor: automatically using best checksumming function avx
Sep 10 00:30:43.496195 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 00:30:43.511035 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:30:43.521318 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:30:43.533147 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Sep 10 00:30:43.538702 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:30:43.555400 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:30:43.571834 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Sep 10 00:30:43.615584 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:30:43.630452 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:30:43.715834 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:30:43.724329 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:30:43.745257 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:30:43.747860 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:30:43.748507 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:30:43.748990 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:30:43.759424 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:30:43.764197 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 10 00:30:43.773644 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:30:43.775190 kernel: cryptd: max_cpu_qlen set to 1000
Sep 10 00:30:43.779295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:30:43.790571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:30:43.795761 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:30:43.795799 kernel: GPT:9289727 != 19775487
Sep 10 00:30:43.795817 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:30:43.795828 kernel: GPT:9289727 != 19775487
Sep 10 00:30:43.795842 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:30:43.795853 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:30:43.792033 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:30:43.798442 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:30:43.800923 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:30:43.801339 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:30:43.803357 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:30:43.810615 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 10 00:30:43.810644 kernel: libata version 3.00 loaded.
Sep 10 00:30:43.810656 kernel: AES CTR mode by8 optimization enabled
Sep 10 00:30:43.814268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:30:43.817245 kernel: ahci 0000:00:1f.2: version 3.0
Sep 10 00:30:43.819431 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 10 00:30:43.822926 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 10 00:30:43.823134 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 10 00:30:44.190047 kernel: scsi host0: ahci
Sep 10 00:30:44.191193 kernel: scsi host1: ahci
Sep 10 00:30:44.194394 kernel: scsi host2: ahci
Sep 10 00:30:44.194596 kernel: scsi host3: ahci
Sep 10 00:30:44.197911 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466)
Sep 10 00:30:44.199184 kernel: BTRFS: device fsid 47ffa5df-7ab2-4f1a-b68f-595717991426 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (459)
Sep 10 00:30:44.210857 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:30:44.231428 kernel: scsi host4: ahci
Sep 10 00:30:44.231634 kernel: scsi host5: ahci
Sep 10 00:30:44.231819 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 10 00:30:44.231834 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 10 00:30:44.231854 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 10 00:30:44.231865 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 10 00:30:44.231875 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 10 00:30:44.231886 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 10 00:30:44.231640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:30:44.239070 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:30:44.246109 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:30:44.256506 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:30:44.259736 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:30:44.277405 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:30:44.279420 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:30:44.287913 disk-uuid[566]: Primary Header is updated.
Sep 10 00:30:44.287913 disk-uuid[566]: Secondary Entries is updated.
Sep 10 00:30:44.287913 disk-uuid[566]: Secondary Header is updated.
Sep 10 00:30:44.293326 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:30:44.300193 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:30:44.300678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:30:44.306245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:30:44.308206 kernel: block device autoloading is deprecated and will be removed.
Sep 10 00:30:44.524266 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 10 00:30:44.524341 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 10 00:30:44.524353 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 10 00:30:44.525194 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 10 00:30:44.526197 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 10 00:30:44.527203 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 10 00:30:44.528324 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 10 00:30:44.528336 kernel: ata3.00: applying bridge limits
Sep 10 00:30:44.529196 kernel: ata3.00: configured for UDMA/100
Sep 10 00:30:44.529221 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 10 00:30:44.570699 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 10 00:30:44.570976 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 10 00:30:44.585188 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 10 00:30:45.304209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:30:45.304785 disk-uuid[569]: The operation has completed successfully.
Sep 10 00:30:45.341257 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:30:45.341395 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:30:45.365381 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:30:45.370882 sh[596]: Success
Sep 10 00:30:45.384196 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 10 00:30:45.418373 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:30:45.441810 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:30:45.447276 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:30:45.461143 kernel: BTRFS info (device dm-0): first mount of filesystem 47ffa5df-7ab2-4f1a-b68f-595717991426
Sep 10 00:30:45.461205 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:30:45.461235 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:30:45.463596 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:30:45.463611 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:30:45.469040 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:30:45.470241 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:30:45.478322 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:30:45.479345 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:30:45.496554 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:30:45.496600 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:30:45.496612 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:30:45.500230 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:30:45.510240 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:30:45.511892 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:30:45.520934 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:30:45.529351 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:30:45.626119 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:30:45.634313 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:30:45.648829 ignition[694]: Ignition 2.19.0
Sep 10 00:30:45.648841 ignition[694]: Stage: fetch-offline
Sep 10 00:30:45.648881 ignition[694]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:30:45.648892 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:30:45.649009 ignition[694]: parsed url from cmdline: ""
Sep 10 00:30:45.649013 ignition[694]: no config URL provided
Sep 10 00:30:45.649019 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:30:45.649030 ignition[694]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:30:45.649070 ignition[694]: op(1): [started] loading QEMU firmware config module
Sep 10 00:30:45.649076 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:30:45.659799 systemd-networkd[783]: lo: Link UP
Sep 10 00:30:45.659816 systemd-networkd[783]: lo: Gained carrier
Sep 10 00:30:45.661834 systemd-networkd[783]: Enumeration completed
Sep 10 00:30:45.662281 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:30:45.663101 systemd[1]: Reached target network.target - Network.
Sep 10 00:30:45.663620 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:30:45.663625 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:30:45.664652 systemd-networkd[783]: eth0: Link UP
Sep 10 00:30:45.664656 systemd-networkd[783]: eth0: Gained carrier
Sep 10 00:30:45.664663 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:30:45.673350 ignition[694]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:30:45.688210 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:30:45.711925 ignition[694]: parsing config with SHA512: c59332ae731752d8f3ce9b8a750717ad10464b05f604c56f4d53c2629d1a6967f1eea60c0d91da5b4827e95d163878edcee86938a9e2550c3fdeb51f8d2b32d2
Sep 10 00:30:45.715956 unknown[694]: fetched base config from "system"
Sep 10 00:30:45.715970 unknown[694]: fetched user config from "qemu"
Sep 10 00:30:45.717914 ignition[694]: fetch-offline: fetch-offline passed
Sep 10 00:30:45.718090 ignition[694]: Ignition finished successfully
Sep 10 00:30:45.720648 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:30:45.723076 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:30:45.728294 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 00:30:45.749570 ignition[789]: Ignition 2.19.0
Sep 10 00:30:45.749581 ignition[789]: Stage: kargs
Sep 10 00:30:45.749756 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:30:45.749770 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:30:45.750736 ignition[789]: kargs: kargs passed
Sep 10 00:30:45.750780 ignition[789]: Ignition finished successfully
Sep 10 00:30:45.753709 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:30:45.765309 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:30:45.786238 ignition[797]: Ignition 2.19.0
Sep 10 00:30:45.786250 ignition[797]: Stage: disks
Sep 10 00:30:45.786431 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:30:45.789586 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:30:45.786443 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:30:45.790842 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:30:45.787296 ignition[797]: disks: disks passed
Sep 10 00:30:45.792667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:30:45.787346 ignition[797]: Ignition finished successfully
Sep 10 00:30:45.794890 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:30:45.796861 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:30:45.797855 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:30:45.810299 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:30:45.821424 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.14
Sep 10 00:30:45.821441 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Sep 10 00:30:45.824028 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:30:45.830346 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:30:45.841254 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:30:45.933189 kernel: EXT4-fs (vda9): mounted filesystem 0a9bf3c7-f8cd-4d40-b949-283957ba2f96 r/w with ordered data mode. Quota mode: none.
Sep 10 00:30:45.933582 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:30:45.935031 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:30:45.946241 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:30:45.947987 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:30:45.949145 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:30:45.949205 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:30:45.956654 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Sep 10 00:30:45.956671 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:30:45.949228 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:30:45.961201 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:30:45.961217 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:30:45.957502 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 00:30:45.961995 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 00:30:45.965441 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:30:45.965820 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:30:46.002830 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 00:30:46.008252 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Sep 10 00:30:46.013084 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 00:30:46.017887 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 00:30:46.107602 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 00:30:46.119250 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 00:30:46.120900 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 00:30:46.128198 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:30:46.158985 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 00:30:46.169115 ignition[927]: INFO : Ignition 2.19.0
Sep 10 00:30:46.169115 ignition[927]: INFO : Stage: mount
Sep 10 00:30:46.170801 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:30:46.170801 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:30:46.170801 ignition[927]: INFO : mount: mount passed
Sep 10 00:30:46.170801 ignition[927]: INFO : Ignition finished successfully
Sep 10 00:30:46.176309 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 00:30:46.189248 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 00:30:46.460941 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 00:30:46.474350 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:30:46.485182 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Sep 10 00:30:46.487712 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:30:46.487736 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:30:46.487747 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:30:46.491197 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:30:46.493923 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:30:46.539839 ignition[958]: INFO : Ignition 2.19.0
Sep 10 00:30:46.539839 ignition[958]: INFO : Stage: files
Sep 10 00:30:46.541991 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:30:46.541991 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:30:46.541991 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:30:46.545783 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:30:46.545783 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:30:46.548586 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:30:46.548586 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:30:46.548586 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:30:46.547005 unknown[958]: wrote ssh authorized keys file for user: core
Sep 10 00:30:46.554233 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:30:46.554233 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 10 00:30:46.660414 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 00:30:46.890840 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:30:46.892904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:30:46.892904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 10 00:30:47.155266 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 00:30:47.363126 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:30:47.363126 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:30:47.366658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 10 00:30:47.605370 systemd-networkd[783]: eth0: Gained IPv6LL
Sep 10 00:30:47.643727 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 00:30:48.053229 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:30:48.053229 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 10 00:30:48.057846 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:30:48.057846 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:30:48.057846 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 10 00:30:48.057846 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 10 00:30:48.057846 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:30:48.057846 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:30:48.057846 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 10 00:30:48.057846 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:30:48.099054 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:30:48.105907 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:30:48.107723 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:30:48.107723 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:30:48.107723 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:30:48.107723 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:30:48.107723 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:30:48.107723 ignition[958]: INFO : files: files passed
Sep 10 00:30:48.107723 ignition[958]: INFO : Ignition finished successfully
Sep 10 00:30:48.111108 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 00:30:48.120333 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 00:30:48.122354 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:30:48.129903 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:30:48.130131 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 00:30:48.138786 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 00:30:48.143017 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:30:48.143017 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:30:48.146732 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:30:48.148143 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:30:48.150630 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 00:30:48.164388 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 00:30:48.205074 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:30:48.205281 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 00:30:48.207756 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 00:30:48.209866 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 00:30:48.211963 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 00:30:48.223345 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 00:30:48.239957 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:30:48.243634 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 00:30:48.256658 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:30:48.258860 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:30:48.260097 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 00:30:48.261907 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:30:48.262031 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:30:48.264031 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 00:30:48.265666 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 00:30:48.267544 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 00:30:48.269451 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:30:48.271335 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 00:30:48.273352 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 00:30:48.275319 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:30:48.277492 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 00:30:48.279374 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 00:30:48.281434 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 00:30:48.283075 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:30:48.283215 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:30:48.285203 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:30:48.286682 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:30:48.288798 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 00:30:48.288952 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:30:48.290907 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 00:30:48.291030 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:30:48.293098 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 00:30:48.293232 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:30:48.295209 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 00:30:48.296791 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 00:30:48.300211 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:30:48.302249 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 00:30:48.304096 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 00:30:48.305764 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 00:30:48.305858 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:30:48.307656 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 00:30:48.307748 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:30:48.309944 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 00:30:48.310067 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:30:48.311859 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 00:30:48.311966 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 00:30:48.329312 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 00:30:48.330857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 00:30:48.331874 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 00:30:48.332002 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:30:48.333981 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 00:30:48.334143 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:30:48.338826 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 00:30:48.338933 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 00:30:48.348590 ignition[1012]: INFO : Ignition 2.19.0
Sep 10 00:30:48.348590 ignition[1012]: INFO : Stage: umount
Sep 10 00:30:48.350200 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:30:48.350200 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:30:48.350200 ignition[1012]: INFO : umount: umount passed
Sep 10 00:30:48.350200 ignition[1012]: INFO : Ignition finished successfully
Sep 10 00:30:48.351816 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 00:30:48.351943 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 00:30:48.353542 systemd[1]: Stopped target network.target - Network.
Sep 10 00:30:48.355002 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 00:30:48.355060 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 00:30:48.356770 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 00:30:48.356822 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 00:30:48.359004 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 00:30:48.359059 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 00:30:48.361225 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 00:30:48.361280 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 00:30:48.363342 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 00:30:48.365349 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 00:30:48.368466 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 00:30:48.371244 systemd-networkd[783]: eth0: DHCPv6 lease lost
Sep 10 00:30:48.373743 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 00:30:48.373902 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 00:30:48.375460 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 00:30:48.375506 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:30:48.384350 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 00:30:48.385527 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 00:30:48.385610 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:30:48.387796 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:30:48.390704 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 00:30:48.390853 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 00:30:48.401490 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 00:30:48.401556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:30:48.403630 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 00:30:48.403687 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:30:48.406620 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 00:30:48.406673 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:30:48.411219 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 00:30:48.412257 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:30:48.415007 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 00:30:48.415990 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 00:30:48.419007 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 00:30:48.420005 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:30:48.422187 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 00:30:48.422234 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:30:48.425056 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 00:30:48.425937 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:30:48.428004 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 00:30:48.428897 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:30:48.430905 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:30:48.431849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:30:48.443328 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 00:30:48.443564 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 00:30:48.443623 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:30:48.443924 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 10 00:30:48.443982 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:30:48.444406 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 00:30:48.444451 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:30:48.444717 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:30:48.444766 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:30:48.454090 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 00:30:48.454224 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 00:30:48.538559 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 00:30:48.538711 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 00:30:48.540781 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 00:30:48.541758 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 00:30:48.541821 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 00:30:48.554323 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 00:30:48.563112 systemd[1]: Switching root.
Sep 10 00:30:48.596004 systemd-journald[193]: Journal stopped
Sep 10 00:30:50.024612 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Sep 10 00:30:50.024708 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 00:30:50.024727 kernel: SELinux: policy capability open_perms=1
Sep 10 00:30:50.024741 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 00:30:50.024756 kernel: SELinux: policy capability always_check_network=0
Sep 10 00:30:50.024774 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 00:30:50.024785 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 00:30:50.024802 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 00:30:50.024814 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 00:30:50.024828 kernel: audit: type=1403 audit(1757464249.140:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 00:30:50.024841 systemd[1]: Successfully loaded SELinux policy in 53.207ms.
Sep 10 00:30:50.024860 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.217ms.
Sep 10 00:30:50.024873 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:30:50.024886 systemd[1]: Detected virtualization kvm.
Sep 10 00:30:50.024901 systemd[1]: Detected architecture x86-64.
Sep 10 00:30:50.024913 systemd[1]: Detected first boot.
Sep 10 00:30:50.024932 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:30:50.024944 zram_generator::config[1056]: No configuration found.
Sep 10 00:30:50.024966 systemd[1]: Populated /etc with preset unit settings.
Sep 10 00:30:50.024978 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 00:30:50.024993 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 00:30:50.025005 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:30:50.025021 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 00:30:50.025033 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 00:30:50.025051 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 00:30:50.025065 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 00:30:50.025081 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 00:30:50.025098 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 00:30:50.025111 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 00:30:50.025123 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 00:30:50.025135 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:30:50.025148 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:30:50.025172 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 00:30:50.025185 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 00:30:50.025197 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 00:30:50.025213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:30:50.025226 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 10 00:30:50.025237 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:30:50.025250 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 00:30:50.025263 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 00:30:50.025275 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:30:50.025288 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 00:30:50.025302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:30:50.025315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:30:50.025327 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:30:50.025339 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:30:50.025351 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 00:30:50.025364 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 00:30:50.025376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:30:50.025388 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:30:50.025400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:30:50.025413 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 00:30:50.025428 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 00:30:50.025440 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 00:30:50.025452 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 00:30:50.025465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:30:50.025483 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 00:30:50.025500 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 00:30:50.025512 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 00:30:50.025525 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 00:30:50.025541 systemd[1]: Reached target machines.target - Containers.
Sep 10 00:30:50.025553 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 00:30:50.025566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:30:50.025578 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:30:50.025590 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 00:30:50.025602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:30:50.025614 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:30:50.025626 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:30:50.025639 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 00:30:50.025659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:30:50.025676 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 00:30:50.025693 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 00:30:50.025707 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 00:30:50.025720 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 00:30:50.025736 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 00:30:50.025749 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:30:50.025760 kernel: fuse: init (API version 7.39)
Sep 10 00:30:50.025772 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:30:50.025789 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 00:30:50.025801 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 00:30:50.025832 systemd-journald[1140]: Collecting audit messages is disabled.
Sep 10 00:30:50.025859 systemd-journald[1140]: Journal started
Sep 10 00:30:50.025883 systemd-journald[1140]: Runtime Journal (/run/log/journal/f2e8ebfcd2d14eb59f6fddd91b5aa61b) is 6.0M, max 48.4M, 42.3M free.
Sep 10 00:30:49.727405 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 00:30:49.753235 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 00:30:49.753770 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 00:30:50.029795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:30:50.029829 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 00:30:50.031190 systemd[1]: Stopped verity-setup.service.
Sep 10 00:30:50.035379 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:30:50.035416 kernel: loop: module loaded
Sep 10 00:30:50.035431 kernel: ACPI: bus type drm_connector registered
Sep 10 00:30:50.039719 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:30:50.041604 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 00:30:50.042896 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 00:30:50.044237 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 00:30:50.045517 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 00:30:50.046849 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 00:30:50.048141 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 00:30:50.050511 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 00:30:50.052261 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:30:50.053989 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 00:30:50.054230 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 00:30:50.055787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:30:50.055996 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:30:50.057557 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:30:50.057745 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:30:50.059125 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:30:50.059320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:30:50.060928 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 00:30:50.061147 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 00:30:50.062779 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:30:50.064073 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:30:50.065597 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:30:50.067068 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 00:30:50.068635 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 00:30:50.085039 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 00:30:50.097277 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 00:30:50.099619 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 00:30:50.100827 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 00:30:50.100917 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:30:50.102964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 10 00:30:50.106401 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 10 00:30:50.110399 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 00:30:50.112065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:30:50.114036 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 00:30:50.117106 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 00:30:50.119448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:30:50.121416 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 00:30:50.123908 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:30:50.125041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:30:50.130459 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 00:30:50.138396 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:30:50.142441 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 00:30:50.143935 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 00:30:50.146367 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 10 00:30:50.153193 kernel: loop0: detected capacity change from 0 to 140768
Sep 10 00:30:50.153296 systemd-journald[1140]: Time spent on flushing to /var/log/journal/f2e8ebfcd2d14eb59f6fddd91b5aa61b is 17.021ms for 964 entries.
Sep 10 00:30:50.153296 systemd-journald[1140]: System Journal (/var/log/journal/f2e8ebfcd2d14eb59f6fddd91b5aa61b) is 8.0M, max 195.6M, 187.6M free.
Sep 10 00:30:50.190660 systemd-journald[1140]: Received client request to flush runtime journal.
Sep 10 00:30:50.162546 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 00:30:50.167038 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 00:30:50.185821 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 10 00:30:50.189058 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:30:50.192304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:30:50.194787 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 00:30:50.202235 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 00:30:50.209867 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 10 00:30:50.225971 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 00:30:50.226275 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Sep 10 00:30:50.226289 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Sep 10 00:30:50.226878 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 10 00:30:50.235289 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 10 00:30:50.237281 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:30:50.240196 kernel: loop1: detected capacity change from 0 to 142488
Sep 10 00:30:50.252436 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 00:30:50.280192 kernel: loop2: detected capacity change from 0 to 221472
Sep 10 00:30:50.285010 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 00:30:50.335498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:30:50.363652 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Sep 10 00:30:50.363696 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Sep 10 00:30:50.372846 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:30:50.379189 kernel: loop3: detected capacity change from 0 to 140768
Sep 10 00:30:50.393192 kernel: loop4: detected capacity change from 0 to 142488
Sep 10 00:30:50.411210 kernel: loop5: detected capacity change from 0 to 221472
Sep 10 00:30:50.420587 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 00:30:50.421271 (sd-merge)[1198]: Merged extensions into '/usr'.
Sep 10 00:30:50.424986 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 00:30:50.425006 systemd[1]: Reloading...
Sep 10 00:30:50.508202 zram_generator::config[1224]: No configuration found.
Sep 10 00:30:50.599774 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 00:30:50.645064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:30:50.695416 systemd[1]: Reloading finished in 269 ms.
Sep 10 00:30:50.737897 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 00:30:50.740330 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 00:30:50.768438 systemd[1]: Starting ensure-sysext.service...
Sep 10 00:30:50.771105 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:30:50.778083 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Sep 10 00:30:50.778105 systemd[1]: Reloading...
Sep 10 00:30:50.813240 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 00:30:50.813971 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 00:30:50.824342 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 00:30:50.831019 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 10 00:30:50.832748 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 10 00:30:50.846198 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:30:50.846412 systemd-tmpfiles[1262]: Skipping /boot
Sep 10 00:30:50.862194 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:30:50.863316 systemd-tmpfiles[1262]: Skipping /boot
Sep 10 00:30:50.901241 zram_generator::config[1294]: No configuration found.
Sep 10 00:30:51.044813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:30:51.099811 systemd[1]: Reloading finished in 321 ms.
Sep 10 00:30:51.119613 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:30:51.138522 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 10 00:30:51.142636 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 00:30:51.145499 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 00:30:51.150214 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:30:51.171353 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 00:30:51.179689 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 00:30:51.182047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:30:51.182246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:30:51.185106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:30:51.189033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:30:51.194671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:30:51.224432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:30:51.224907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:30:51.227638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:30:51.228009 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:30:51.230589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:30:51.230857 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:30:51.232814 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:30:51.233069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:30:51.241694 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 00:30:51.247719 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:30:51.248031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:30:51.254402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:30:51.268496 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:30:51.271278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:30:51.272430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:30:51.272559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:30:51.273600 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 00:30:51.275592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:30:51.275779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:30:51.277444 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 00:30:51.281626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:30:51.281826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:30:51.283638 augenrules[1357]: No rules
Sep 10 00:30:51.285557 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 10 00:30:51.292626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:30:51.293016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:30:51.296123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:30:51.299905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:30:51.301196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:30:51.301395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:30:51.301561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:30:51.303131 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:30:51.303383 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:30:51.305489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:30:51.305678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:30:51.309817 systemd[1]: Finished ensure-sysext.service.
Sep 10 00:30:51.311398 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:30:51.311614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:30:51.318250 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:30:51.321579 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 00:30:51.403423 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 10 00:30:51.407437 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 00:30:51.409298 systemd[1]: Reached target time-set.target - System Time Set.
Sep 10 00:30:51.410461 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 00:30:51.436008 systemd-resolved[1330]: Positive Trust Anchors:
Sep 10 00:30:51.436034 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:30:51.436066 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:30:51.440648 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Sep 10 00:30:51.442775 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:30:51.444120 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:30:51.492569 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 00:30:51.505522 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:30:51.508392 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 00:30:51.526012 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 00:30:51.551625 systemd-udevd[1381]: Using default interface naming scheme 'v255'.
Sep 10 00:30:51.569642 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:30:51.579992 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:30:51.617957 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 10 00:30:51.623214 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1399)
Sep 10 00:30:51.637142 systemd-networkd[1388]: lo: Link UP
Sep 10 00:30:51.637157 systemd-networkd[1388]: lo: Gained carrier
Sep 10 00:30:51.638019 systemd-networkd[1388]: Enumeration completed
Sep 10 00:30:51.640047 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:30:51.641432 systemd[1]: Reached target network.target - Network.
Sep 10 00:30:51.647339 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 10 00:30:51.721195 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 10 00:30:51.723239 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:30:51.723253 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:30:51.725595 systemd-networkd[1388]: eth0: Link UP
Sep 10 00:30:51.725609 systemd-networkd[1388]: eth0: Gained carrier
Sep 10 00:30:51.725623 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:30:51.727183 kernel: ACPI: button: Power Button [PWRF]
Sep 10 00:30:51.732743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:30:51.746408 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 00:30:51.759265 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 10 00:30:51.754324 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:30:51.755329 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection.
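eth0 above is matched by the catch-all /usr/lib/systemd/network/zz-default.network and then picks up a DHCPv4 lease. A minimal sketch of what such a fallback .network file typically contains (illustrative; the exact option set shipped by Flatcar is not shown in this log):

```ini
# Catch-all fallback: matches any interface name ("potentially
# unpredictable", as the log notes) and enables DHCP.
[Match]
Name=*

[Network]
DHCP=yes
```

A more specific .network file that sorts earlier lexically would override this fallback for named interfaces.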
Sep 10 00:30:51.765282 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 10 00:30:51.765533 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 10 00:30:51.765727 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 10 00:30:51.762317 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 10 00:30:51.762384 systemd-timesyncd[1376]: Initial clock synchronization to Wed 2025-09-10 00:30:51.773752 UTC.
Sep 10 00:30:51.787429 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 00:30:51.839196 kernel: mousedev: PS/2 mouse device common for all mice
Sep 10 00:30:51.844636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:30:51.852304 kernel: kvm_amd: TSC scaling supported
Sep 10 00:30:51.852355 kernel: kvm_amd: Nested Virtualization enabled
Sep 10 00:30:51.852370 kernel: kvm_amd: Nested Paging enabled
Sep 10 00:30:51.853362 kernel: kvm_amd: LBR virtualization supported
Sep 10 00:30:51.853390 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 10 00:30:51.854434 kernel: kvm_amd: Virtual GIF supported
Sep 10 00:30:51.875266 kernel: EDAC MC: Ver: 3.0.0
Sep 10 00:30:51.916077 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 10 00:30:51.943265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:30:51.957573 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 10 00:30:51.967181 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 00:30:52.002427 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 10 00:30:52.003924 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:30:52.005020 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:30:52.006178 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 10 00:30:52.007575 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 10 00:30:52.009021 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 10 00:30:52.010218 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 10 00:30:52.011427 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 10 00:30:52.012634 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 10 00:30:52.012666 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:30:52.013553 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:30:52.015386 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 10 00:30:52.018347 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 10 00:30:52.033962 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 10 00:30:52.036394 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 10 00:30:52.037957 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 10 00:30:52.039143 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:30:52.040116 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:30:52.041054 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:30:52.041082 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:30:52.042125 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 10 00:30:52.044221 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 10 00:30:52.048088 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 00:30:52.048277 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 10 00:30:52.052419 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 10 00:30:52.053583 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 10 00:30:52.054899 jq[1435]: false
Sep 10 00:30:52.056362 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 10 00:30:52.060127 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 10 00:30:52.063482 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 10 00:30:52.066479 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 10 00:30:52.071711 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 10 00:30:52.073271 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 10 00:30:52.073949 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 10 00:30:52.075793 systemd[1]: Starting update-engine.service - Update Engine...
Sep 10 00:30:52.078560 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 10 00:30:52.086013 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 10 00:30:52.089098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 10 00:30:52.090593 extend-filesystems[1436]: Found loop3
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found loop4
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found loop5
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found sr0
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found vda
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found vda1
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found vda2
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found vda3
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found usr
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found vda4
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found vda6
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found vda7
Sep 10 00:30:52.092232 extend-filesystems[1436]: Found vda9
Sep 10 00:30:52.092232 extend-filesystems[1436]: Checking size of /dev/vda9
Sep 10 00:30:52.090699 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 10 00:30:52.102560 jq[1449]: true
Sep 10 00:30:52.097549 systemd[1]: motdgen.service: Deactivated successfully.
Sep 10 00:30:52.100352 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 10 00:30:52.108147 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 10 00:30:52.108500 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 10 00:30:52.116647 dbus-daemon[1434]: [system] SELinux support is enabled
Sep 10 00:30:52.125520 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 10 00:30:52.133727 extend-filesystems[1436]: Resized partition /dev/vda9
Sep 10 00:30:52.135347 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 10 00:30:52.180553 update_engine[1448]: I20250910 00:30:52.178798 1448 main.cc:92] Flatcar Update Engine starting
Sep 10 00:30:52.180553 update_engine[1448]: I20250910 00:30:52.180403 1448 update_check_scheduler.cc:74] Next update check in 7m33s
Sep 10 00:30:52.180948 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024)
Sep 10 00:30:52.194984 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 10 00:30:52.195045 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1386)
Sep 10 00:30:52.200012 tar[1454]: linux-amd64/helm
Sep 10 00:30:52.197818 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 10 00:30:52.197857 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 10 00:30:52.199372 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 10 00:30:52.199398 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 10 00:30:52.201999 systemd[1]: Started update-engine.service - Update Engine.
Sep 10 00:30:52.208425 jq[1457]: true
Sep 10 00:30:52.209348 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 10 00:30:52.227781 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 10 00:30:52.227811 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 10 00:30:52.231678 systemd-logind[1444]: New seat seat0.
Sep 10 00:30:52.235263 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 10 00:30:52.290201 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 10 00:30:52.417866 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 10 00:30:52.430069 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 10 00:30:52.430069 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 10 00:30:52.430069 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 10 00:30:52.438811 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Sep 10 00:30:52.433807 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 10 00:30:52.434141 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 10 00:30:52.452946 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Sep 10 00:30:52.455591 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 10 00:30:52.458656 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 10 00:30:52.465593 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 10 00:30:52.539527 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 10 00:30:52.558416 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 10 00:30:52.564830 systemd[1]: issuegen.service: Deactivated successfully.
Sep 10 00:30:52.565153 systemd[1]: Finished issuegen.service - Generate /run/issue.
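The resize messages above count ext4 blocks; per the "(4k) blocks" note from resize2fs, each block is 4096 bytes. A quick conversion of the figures reported for /dev/vda9 (a sketch; the helper name is ours):

```python
def blocks_to_bytes(blocks: int, block_size: int = 4096) -> int:
    """Convert an ext4 block count to bytes (4 KiB blocks per the log)."""
    return blocks * block_size

# Figures reported by resize2fs in the log above:
old = blocks_to_bytes(553472)    # before the online resize
new = blocks_to_bytes(1864699)   # after growing into the partition

print(f"{old / 2**30:.2f} GiB -> {new / 2**30:.2f} GiB")  # 2.11 GiB -> 7.11 GiB
```

So the root filesystem grew from roughly 2.1 GiB to 7.1 GiB while mounted, which is why resize2fs notes "on-line resizing required".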
Sep 10 00:30:52.570399 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 10 00:30:52.612930 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 10 00:30:52.617701 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 10 00:30:52.623290 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 10 00:30:52.624587 systemd[1]: Reached target getty.target - Login Prompts.
Sep 10 00:30:52.718812 containerd[1461]: time="2025-09-10T00:30:52.718606885Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 10 00:30:52.750655 containerd[1461]: time="2025-09-10T00:30:52.750575035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:30:52.752695 containerd[1461]: time="2025-09-10T00:30:52.752649674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:30:52.752695 containerd[1461]: time="2025-09-10T00:30:52.752680142Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 10 00:30:52.752695 containerd[1461]: time="2025-09-10T00:30:52.752703355Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 10 00:30:52.753066 containerd[1461]: time="2025-09-10T00:30:52.753048476Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 10 00:30:52.753101 containerd[1461]: time="2025-09-10T00:30:52.753070355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 10 00:30:52.753205 containerd[1461]: time="2025-09-10T00:30:52.753179950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:30:52.753205 containerd[1461]: time="2025-09-10T00:30:52.753198591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:30:52.753464 containerd[1461]: time="2025-09-10T00:30:52.753441004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:30:52.753464 containerd[1461]: time="2025-09-10T00:30:52.753460798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 10 00:30:52.753529 containerd[1461]: time="2025-09-10T00:30:52.753475671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:30:52.753529 containerd[1461]: time="2025-09-10T00:30:52.753487006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 10 00:30:52.753627 containerd[1461]: time="2025-09-10T00:30:52.753607506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:30:52.755110 containerd[1461]: time="2025-09-10T00:30:52.755067490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:30:52.755279 containerd[1461]: time="2025-09-10T00:30:52.755254378Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:30:52.755315 containerd[1461]: time="2025-09-10T00:30:52.755277961Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 10 00:30:52.755611 containerd[1461]: time="2025-09-10T00:30:52.755583454Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 10 00:30:52.755680 containerd[1461]: time="2025-09-10T00:30:52.755662401Z" level=info msg="metadata content store policy set" policy=shared
Sep 10 00:30:52.762259 containerd[1461]: time="2025-09-10T00:30:52.762208397Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 10 00:30:52.762314 containerd[1461]: time="2025-09-10T00:30:52.762286953Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 10 00:30:52.762314 containerd[1461]: time="2025-09-10T00:30:52.762306196Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 10 00:30:52.762370 containerd[1461]: time="2025-09-10T00:30:52.762321871Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 10 00:30:52.762370 containerd[1461]: time="2025-09-10T00:30:52.762338227Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 10 00:30:52.762538 containerd[1461]: time="2025-09-10T00:30:52.762515153Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 10 00:30:52.762795 containerd[1461]: time="2025-09-10T00:30:52.762761644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 10 00:30:52.762913 containerd[1461]: time="2025-09-10T00:30:52.762891906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 10 00:30:52.762935 containerd[1461]: time="2025-09-10T00:30:52.762911239Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 10 00:30:52.762935 containerd[1461]: time="2025-09-10T00:30:52.762928417Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 10 00:30:52.762984 containerd[1461]: time="2025-09-10T00:30:52.762949815Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 10 00:30:52.763006 containerd[1461]: time="2025-09-10T00:30:52.762980884Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 10 00:30:52.763006 containerd[1461]: time="2025-09-10T00:30:52.762997241Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 10 00:30:52.763043 containerd[1461]: time="2025-09-10T00:30:52.763011342Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 10 00:30:52.763043 containerd[1461]: time="2025-09-10T00:30:52.763033883Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 10 00:30:52.763079 containerd[1461]: time="2025-09-10T00:30:52.763049809Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 10 00:30:52.763079 containerd[1461]: time="2025-09-10T00:30:52.763063008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 10 00:30:52.763079 containerd[1461]: time="2025-09-10T00:30:52.763074955Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 10 00:30:52.763150 containerd[1461]: time="2025-09-10T00:30:52.763106646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763150 containerd[1461]: time="2025-09-10T00:30:52.763132664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763220 containerd[1461]: time="2025-09-10T00:30:52.763153370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763220 containerd[1461]: time="2025-09-10T00:30:52.763196777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763220 containerd[1461]: time="2025-09-10T00:30:52.763215559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763284 containerd[1461]: time="2025-09-10T00:30:52.763230623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763284 containerd[1461]: time="2025-09-10T00:30:52.763243832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763284 containerd[1461]: time="2025-09-10T00:30:52.763258054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763284 containerd[1461]: time="2025-09-10T00:30:52.763272276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763361 containerd[1461]: time="2025-09-10T00:30:52.763287360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763361 containerd[1461]: time="2025-09-10T00:30:52.763300238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763361 containerd[1461]: time="2025-09-10T00:30:52.763311675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763361 containerd[1461]: time="2025-09-10T00:30:52.763324282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763361 containerd[1461]: time="2025-09-10T00:30:52.763349599Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 10 00:30:52.763454 containerd[1461]: time="2025-09-10T00:30:52.763380708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763454 containerd[1461]: time="2025-09-10T00:30:52.763395492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763454 containerd[1461]: time="2025-09-10T00:30:52.763409894Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 10 00:30:52.763507 containerd[1461]: time="2025-09-10T00:30:52.763462892Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 10 00:30:52.763507 containerd[1461]: time="2025-09-10T00:30:52.763482095Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 10 00:30:52.763507 containerd[1461]: time="2025-09-10T00:30:52.763492909Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 10 00:30:52.763507 containerd[1461]: time="2025-09-10T00:30:52.763504716Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 10 00:30:52.763586 containerd[1461]: time="2025-09-10T00:30:52.763515500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763586 containerd[1461]: time="2025-09-10T00:30:52.763528870Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 10 00:30:52.763586 containerd[1461]: time="2025-09-10T00:30:52.763550298Z" level=info msg="NRI interface is disabled by configuration."
Sep 10 00:30:52.763586 containerd[1461]: time="2025-09-10T00:30:52.763565091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 10 00:30:52.763957 containerd[1461]: time="2025-09-10T00:30:52.763886630Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 10 00:30:52.763957 containerd[1461]: time="2025-09-10T00:30:52.763959663Z" level=info msg="Connect containerd service"
Sep 10 00:30:52.764237 containerd[1461]: time="2025-09-10T00:30:52.764009845Z" level=info msg="using legacy CRI server"
Sep 10 00:30:52.764237 containerd[1461]: time="2025-09-10T00:30:52.764018133Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 10 00:30:52.764237 containerd[1461]: time="2025-09-10T00:30:52.764147112Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 10 00:30:52.764860 containerd[1461]: time="2025-09-10T00:30:52.764823835Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 00:30:52.765301 containerd[1461]: time="2025-09-10T00:30:52.765253065Z" level=info msg="Start subscribing containerd event"
Sep 10 00:30:52.765336 containerd[1461]: time="2025-09-10T00:30:52.765322851Z" level=info msg="Start recovering state"
Sep 10 00:30:52.765470 containerd[1461]: time="2025-09-10T00:30:52.765446788Z" level=info msg="Start event monitor"
Sep 10 00:30:52.765498 containerd[1461]: time="2025-09-10T00:30:52.765472756Z" level=info msg="Start snapshots syncer"
Sep 10 00:30:52.765498 containerd[1461]: time="2025-09-10T00:30:52.765485014Z" level=info msg="Start cni network conf syncer for default"
Sep 10 00:30:52.765498 containerd[1461]: time="2025-09-10T00:30:52.765495157Z" level=info msg="Start streaming server"
Sep 10 00:30:52.765887 containerd[1461]: time="2025-09-10T00:30:52.765676382Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 10 00:30:52.765951 containerd[1461]: time="2025-09-10T00:30:52.765920828Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 10 00:30:52.766033 containerd[1461]: time="2025-09-10T00:30:52.766011150Z" level=info msg="containerd successfully booted in 0.050789s"
Sep 10 00:30:52.766180 systemd[1]: Started containerd.service - containerd container runtime.
Sep 10 00:30:52.789498 systemd-networkd[1388]: eth0: Gained IPv6LL
Sep 10 00:30:52.792784 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 10 00:30:52.794527 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 00:30:52.812406 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 00:30:52.814912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:30:52.819391 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 00:30:52.867336 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 00:30:52.867874 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 00:30:52.874704 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 00:30:52.878380 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 00:30:53.013257 tar[1454]: linux-amd64/LICENSE Sep 10 00:30:53.013365 tar[1454]: linux-amd64/README.md Sep 10 00:30:53.037604 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 00:30:54.317951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:30:54.320005 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 00:30:54.321396 systemd[1]: Startup finished in 1.155s (kernel) + 6.428s (initrd) + 5.232s (userspace) = 12.816s. 
Sep 10 00:30:54.334227 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:30:55.021041 kubelet[1546]: E0910 00:30:55.020884 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:30:55.025617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:30:55.025831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:30:55.026329 systemd[1]: kubelet.service: Consumed 2.036s CPU time. Sep 10 00:30:56.641813 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:30:56.643295 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:60028.service - OpenSSH per-connection server daemon (10.0.0.1:60028). Sep 10 00:30:56.686957 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 60028 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:30:56.689427 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:30:56.698666 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:30:56.711408 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:30:56.713315 systemd-logind[1444]: New session 1 of user core. Sep 10 00:30:56.724067 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:30:56.736410 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 10 00:30:56.740073 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:30:56.846856 systemd[1563]: Queued start job for default target default.target. Sep 10 00:30:56.855589 systemd[1563]: Created slice app.slice - User Application Slice. Sep 10 00:30:56.855617 systemd[1563]: Reached target paths.target - Paths. Sep 10 00:30:56.855631 systemd[1563]: Reached target timers.target - Timers. Sep 10 00:30:56.857461 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:30:56.870436 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:30:56.870613 systemd[1563]: Reached target sockets.target - Sockets. Sep 10 00:30:56.870635 systemd[1563]: Reached target basic.target - Basic System. Sep 10 00:30:56.870685 systemd[1563]: Reached target default.target - Main User Target. Sep 10 00:30:56.870724 systemd[1563]: Startup finished in 122ms. Sep 10 00:30:56.871113 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:30:56.872839 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:30:56.937719 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:60036.service - OpenSSH per-connection server daemon (10.0.0.1:60036). Sep 10 00:30:56.972357 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 60036 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:30:56.974000 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:30:56.978054 systemd-logind[1444]: New session 2 of user core. Sep 10 00:30:56.994300 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 00:30:57.049652 sshd[1574]: pam_unix(sshd:session): session closed for user core Sep 10 00:30:57.060945 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:60036.service: Deactivated successfully. Sep 10 00:30:57.062912 systemd[1]: session-2.scope: Deactivated successfully. 
Sep 10 00:30:57.064600 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:30:57.071412 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:60038.service - OpenSSH per-connection server daemon (10.0.0.1:60038). Sep 10 00:30:57.072312 systemd-logind[1444]: Removed session 2. Sep 10 00:30:57.099297 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 60038 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:30:57.100985 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:30:57.104818 systemd-logind[1444]: New session 3 of user core. Sep 10 00:30:57.111280 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:30:57.161904 sshd[1581]: pam_unix(sshd:session): session closed for user core Sep 10 00:30:57.175221 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:60038.service: Deactivated successfully. Sep 10 00:30:57.177233 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:30:57.179083 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:30:57.191515 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:60046.service - OpenSSH per-connection server daemon (10.0.0.1:60046). Sep 10 00:30:57.192742 systemd-logind[1444]: Removed session 3. Sep 10 00:30:57.220999 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 60046 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:30:57.222616 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:30:57.226586 systemd-logind[1444]: New session 4 of user core. Sep 10 00:30:57.242318 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 00:30:57.297991 sshd[1588]: pam_unix(sshd:session): session closed for user core Sep 10 00:30:57.315077 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:60046.service: Deactivated successfully. Sep 10 00:30:57.316975 systemd[1]: session-4.scope: Deactivated successfully. 
Sep 10 00:30:57.318695 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:30:57.328462 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:60062.service - OpenSSH per-connection server daemon (10.0.0.1:60062). Sep 10 00:30:57.329463 systemd-logind[1444]: Removed session 4. Sep 10 00:30:57.356960 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 60062 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:30:57.358605 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:30:57.362333 systemd-logind[1444]: New session 5 of user core. Sep 10 00:30:57.373292 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:30:57.432224 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:30:57.432599 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:30:57.455768 sudo[1598]: pam_unix(sudo:session): session closed for user root Sep 10 00:30:57.457924 sshd[1595]: pam_unix(sshd:session): session closed for user core Sep 10 00:30:57.470238 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:60062.service: Deactivated successfully. Sep 10 00:30:57.472778 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:30:57.475018 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:30:57.483465 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:60066.service - OpenSSH per-connection server daemon (10.0.0.1:60066). Sep 10 00:30:57.484503 systemd-logind[1444]: Removed session 5. Sep 10 00:30:57.512377 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 60066 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:30:57.513974 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:30:57.518059 systemd-logind[1444]: New session 6 of user core. 
Sep 10 00:30:57.527298 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 00:30:57.581385 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 00:30:57.581747 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:30:57.585612 sudo[1607]: pam_unix(sudo:session): session closed for user root Sep 10 00:30:57.592448 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 10 00:30:57.592811 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:30:57.611390 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 10 00:30:57.613218 auditctl[1610]: No rules Sep 10 00:30:57.614651 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 00:30:57.614938 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 10 00:30:57.616775 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:30:57.649388 augenrules[1628]: No rules Sep 10 00:30:57.651430 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:30:57.652761 sudo[1606]: pam_unix(sudo:session): session closed for user root Sep 10 00:30:57.654686 sshd[1603]: pam_unix(sshd:session): session closed for user core Sep 10 00:30:57.666865 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:60066.service: Deactivated successfully. Sep 10 00:30:57.668741 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:30:57.670517 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:30:57.681400 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:60078.service - OpenSSH per-connection server daemon (10.0.0.1:60078). Sep 10 00:30:57.682308 systemd-logind[1444]: Removed session 6. 
Sep 10 00:30:57.710142 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 60078 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:30:57.712056 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:30:57.716102 systemd-logind[1444]: New session 7 of user core. Sep 10 00:30:57.725291 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 00:30:57.778799 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:30:57.779145 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:30:58.405441 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 00:30:58.405592 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 00:30:58.997068 dockerd[1657]: time="2025-09-10T00:30:58.996806853Z" level=info msg="Starting up" Sep 10 00:30:59.462046 dockerd[1657]: time="2025-09-10T00:30:59.461877059Z" level=info msg="Loading containers: start." Sep 10 00:30:59.600231 kernel: Initializing XFRM netlink socket Sep 10 00:30:59.703711 systemd-networkd[1388]: docker0: Link UP Sep 10 00:30:59.727383 dockerd[1657]: time="2025-09-10T00:30:59.727238273Z" level=info msg="Loading containers: done." Sep 10 00:30:59.746840 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck880527341-merged.mount: Deactivated successfully. 
Sep 10 00:30:59.750450 dockerd[1657]: time="2025-09-10T00:30:59.750405050Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:30:59.750517 dockerd[1657]: time="2025-09-10T00:30:59.750502231Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 10 00:30:59.750654 dockerd[1657]: time="2025-09-10T00:30:59.750629366Z" level=info msg="Daemon has completed initialization" Sep 10 00:30:59.790269 dockerd[1657]: time="2025-09-10T00:30:59.790114219Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:30:59.790408 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 00:31:00.945707 containerd[1461]: time="2025-09-10T00:31:00.945632419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 10 00:31:01.831937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551311771.mount: Deactivated successfully. 
Sep 10 00:31:03.220911 containerd[1461]: time="2025-09-10T00:31:03.220830665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:03.221443 containerd[1461]: time="2025-09-10T00:31:03.221377239Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 10 00:31:03.222669 containerd[1461]: time="2025-09-10T00:31:03.222601216Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:03.225566 containerd[1461]: time="2025-09-10T00:31:03.225510151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:03.226636 containerd[1461]: time="2025-09-10T00:31:03.226605705Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.280915211s" Sep 10 00:31:03.226682 containerd[1461]: time="2025-09-10T00:31:03.226654399Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 10 00:31:03.227757 containerd[1461]: time="2025-09-10T00:31:03.227728987Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 10 00:31:04.627232 containerd[1461]: time="2025-09-10T00:31:04.627133550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:04.627877 containerd[1461]: time="2025-09-10T00:31:04.627778379Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 10 00:31:04.628945 containerd[1461]: time="2025-09-10T00:31:04.628907906Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:04.632177 containerd[1461]: time="2025-09-10T00:31:04.632125793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:04.635095 containerd[1461]: time="2025-09-10T00:31:04.634651502Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.406890658s" Sep 10 00:31:04.635095 containerd[1461]: time="2025-09-10T00:31:04.634698643Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 10 00:31:04.635484 containerd[1461]: time="2025-09-10T00:31:04.635439224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 10 00:31:05.276377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:31:05.286501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:05.551446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 00:31:05.557293 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:31:05.640124 kubelet[1872]: E0910 00:31:05.639932 1872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:31:05.646847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:31:05.647116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:31:06.668569 containerd[1461]: time="2025-09-10T00:31:06.668477603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:06.669251 containerd[1461]: time="2025-09-10T00:31:06.669229768Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 10 00:31:06.670515 containerd[1461]: time="2025-09-10T00:31:06.670453656Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:06.674070 containerd[1461]: time="2025-09-10T00:31:06.673999647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:06.675551 containerd[1461]: time="2025-09-10T00:31:06.675514278Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 2.039943715s" Sep 10 00:31:06.675597 containerd[1461]: time="2025-09-10T00:31:06.675560705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 10 00:31:06.676274 containerd[1461]: time="2025-09-10T00:31:06.676244286Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 10 00:31:08.180483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292572398.mount: Deactivated successfully. Sep 10 00:31:08.690715 containerd[1461]: time="2025-09-10T00:31:08.690633093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:08.691570 containerd[1461]: time="2025-09-10T00:31:08.691494085Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 10 00:31:08.692896 containerd[1461]: time="2025-09-10T00:31:08.692838117Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:08.694976 containerd[1461]: time="2025-09-10T00:31:08.694938141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:08.695560 containerd[1461]: time="2025-09-10T00:31:08.695507984Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.019231421s" Sep 10 00:31:08.695560 containerd[1461]: time="2025-09-10T00:31:08.695553670Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 10 00:31:08.696248 containerd[1461]: time="2025-09-10T00:31:08.696203339Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 00:31:09.316581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783130932.mount: Deactivated successfully. Sep 10 00:31:10.253523 containerd[1461]: time="2025-09-10T00:31:10.253443009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:10.254224 containerd[1461]: time="2025-09-10T00:31:10.254191947Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 10 00:31:10.255362 containerd[1461]: time="2025-09-10T00:31:10.255328459Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:10.259181 containerd[1461]: time="2025-09-10T00:31:10.259117287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:10.260199 containerd[1461]: time="2025-09-10T00:31:10.260158161Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.563912703s" Sep 10 00:31:10.260243 containerd[1461]: time="2025-09-10T00:31:10.260202634Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 00:31:10.260838 containerd[1461]: time="2025-09-10T00:31:10.260805217Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:31:10.849961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410522704.mount: Deactivated successfully. Sep 10 00:31:10.856974 containerd[1461]: time="2025-09-10T00:31:10.856918832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:10.857852 containerd[1461]: time="2025-09-10T00:31:10.857776043Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 10 00:31:10.859133 containerd[1461]: time="2025-09-10T00:31:10.859097191Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:10.861655 containerd[1461]: time="2025-09-10T00:31:10.861622671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:10.862405 containerd[1461]: time="2025-09-10T00:31:10.862349884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 601.512029ms" Sep 10 
00:31:10.862474 containerd[1461]: time="2025-09-10T00:31:10.862406441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 00:31:10.862946 containerd[1461]: time="2025-09-10T00:31:10.862919979Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 10 00:31:11.536567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114653936.mount: Deactivated successfully. Sep 10 00:31:13.620515 containerd[1461]: time="2025-09-10T00:31:13.620443636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:13.621272 containerd[1461]: time="2025-09-10T00:31:13.621233885Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 10 00:31:13.622409 containerd[1461]: time="2025-09-10T00:31:13.622373753Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:13.625539 containerd[1461]: time="2025-09-10T00:31:13.625498853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:13.626802 containerd[1461]: time="2025-09-10T00:31:13.626748208Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.763796714s" Sep 10 00:31:13.626862 containerd[1461]: time="2025-09-10T00:31:13.626800255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 10 00:31:15.897306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 00:31:15.907325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:16.085261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:16.090008 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:31:16.118002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:16.153355 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:31:16.153752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:16.165495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:16.190969 systemd[1]: Reloading requested from client PID 2049 ('systemctl') (unit session-7.scope)... Sep 10 00:31:16.190986 systemd[1]: Reloading... Sep 10 00:31:16.279300 zram_generator::config[2094]: No configuration found. Sep 10 00:31:16.888982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:31:16.968064 systemd[1]: Reloading finished in 776 ms. Sep 10 00:31:17.018654 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 00:31:17.018793 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 00:31:17.019184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:17.021196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:17.207443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 00:31:17.212948 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 00:31:17.257897 kubelet[2137]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:31:17.257897 kubelet[2137]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 10 00:31:17.257897 kubelet[2137]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:31:17.258361 kubelet[2137]: I0910 00:31:17.257975 2137 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:31:17.518390 kubelet[2137]: I0910 00:31:17.518274 2137 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 10 00:31:17.518390 kubelet[2137]: I0910 00:31:17.518305 2137 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:31:17.518587 kubelet[2137]: I0910 00:31:17.518542 2137 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 10 00:31:17.536792 kubelet[2137]: E0910 00:31:17.536754 2137 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:17.537229 kubelet[2137]: I0910 00:31:17.537200 2137 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:31:17.543466 kubelet[2137]: E0910 00:31:17.543434 2137 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:31:17.543466 kubelet[2137]: I0910 00:31:17.543466 2137 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:31:17.550358 kubelet[2137]: I0910 00:31:17.550318 2137 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:31:17.551152 kubelet[2137]: I0910 00:31:17.551122 2137 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 10 00:31:17.551395 kubelet[2137]: I0910 00:31:17.551339 2137 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:31:17.551611 kubelet[2137]: I0910 00:31:17.551385 2137 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 00:31:17.551707 kubelet[2137]: I0910 00:31:17.551630 2137 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:31:17.551707 kubelet[2137]: I0910 00:31:17.551653 2137 container_manager_linux.go:300] "Creating device plugin manager"
Sep 10 00:31:17.551840 kubelet[2137]: I0910 00:31:17.551822 2137 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:31:17.554721 kubelet[2137]: I0910 00:31:17.554691 2137 kubelet.go:408] "Attempting to sync node with API server"
Sep 10 00:31:17.554721 kubelet[2137]: I0910 00:31:17.554722 2137 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:31:17.554796 kubelet[2137]: I0910 00:31:17.554760 2137 kubelet.go:314] "Adding apiserver pod source"
Sep 10 00:31:17.554796 kubelet[2137]: I0910 00:31:17.554788 2137 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:31:17.557595 kubelet[2137]: W0910 00:31:17.557533 2137 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 10 00:31:17.557648 kubelet[2137]: E0910 00:31:17.557607 2137 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:17.557839 kubelet[2137]: I0910 00:31:17.557810 2137 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 10 00:31:17.558281 kubelet[2137]: I0910 00:31:17.558244 2137 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:31:17.558333 kubelet[2137]: W0910 00:31:17.558298 2137 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 10 00:31:17.558355 kubelet[2137]: E0910 00:31:17.558334 2137 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:17.559027 kubelet[2137]: W0910 00:31:17.558996 2137 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 00:31:17.561757 kubelet[2137]: I0910 00:31:17.561383 2137 server.go:1274] "Started kubelet"
Sep 10 00:31:17.561864 kubelet[2137]: I0910 00:31:17.561837 2137 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:31:17.563202 kubelet[2137]: I0910 00:31:17.562818 2137 server.go:449] "Adding debug handlers to kubelet server"
Sep 10 00:31:17.565283 kubelet[2137]: I0910 00:31:17.564252 2137 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:31:17.565283 kubelet[2137]: I0910 00:31:17.564281 2137 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:31:17.565283 kubelet[2137]: I0910 00:31:17.564559 2137 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:31:17.565283 kubelet[2137]: I0910 00:31:17.565096 2137 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:31:17.566418 kubelet[2137]: E0910 00:31:17.565887 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:17.566418 kubelet[2137]: I0910 00:31:17.565934 2137 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 10 00:31:17.566418 kubelet[2137]: I0910 00:31:17.566095 2137 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 10 00:31:17.566418 kubelet[2137]: I0910 00:31:17.566204 2137 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:31:17.566527 kubelet[2137]: W0910 00:31:17.566487 2137 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 10 00:31:17.566562 kubelet[2137]: E0910 00:31:17.566526 2137 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:17.567571 kubelet[2137]: E0910 00:31:17.565186 2137 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c472f17b9fc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:31:17.561352134 +0000 UTC m=+0.343701043,LastTimestamp:2025-09-10 00:31:17.561352134 +0000 UTC m=+0.343701043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 00:31:17.568147 kubelet[2137]: E0910 00:31:17.567799 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms"
Sep 10 00:31:17.570728 kubelet[2137]: I0910 00:31:17.570696 2137 factory.go:221] Registration of the containerd container factory successfully
Sep 10 00:31:17.570728 kubelet[2137]: I0910 00:31:17.570722 2137 factory.go:221] Registration of the systemd container factory successfully
Sep 10 00:31:17.570843 kubelet[2137]: I0910 00:31:17.570815 2137 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:31:17.573047 kubelet[2137]: E0910 00:31:17.573017 2137 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 00:31:17.587926 kubelet[2137]: I0910 00:31:17.587886 2137 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 10 00:31:17.587926 kubelet[2137]: I0910 00:31:17.587906 2137 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 10 00:31:17.587926 kubelet[2137]: I0910 00:31:17.587906 2137 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:31:17.587926 kubelet[2137]: I0910 00:31:17.587926 2137 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:31:17.589399 kubelet[2137]: I0910 00:31:17.589376 2137 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:31:17.589452 kubelet[2137]: I0910 00:31:17.589414 2137 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 10 00:31:17.589452 kubelet[2137]: I0910 00:31:17.589445 2137 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 10 00:31:17.589603 kubelet[2137]: E0910 00:31:17.589489 2137 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:31:17.590234 kubelet[2137]: W0910 00:31:17.590029 2137 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 10 00:31:17.590234 kubelet[2137]: E0910 00:31:17.590077 2137 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:17.666987 kubelet[2137]: E0910 00:31:17.666960 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:17.690268 kubelet[2137]: E0910 00:31:17.690238 2137 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 10 00:31:17.767329 kubelet[2137]: E0910 00:31:17.767306 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:17.768711 kubelet[2137]: E0910 00:31:17.768625 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms"
Sep 10 00:31:17.867961 kubelet[2137]: E0910 00:31:17.867930 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:17.891121 kubelet[2137]: E0910 00:31:17.891105 2137 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 10 00:31:17.962559 kubelet[2137]: E0910 00:31:17.962474 2137 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c472f17b9fc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:31:17.561352134 +0000 UTC m=+0.343701043,LastTimestamp:2025-09-10 00:31:17.561352134 +0000 UTC m=+0.343701043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 00:31:17.968834 kubelet[2137]: E0910 00:31:17.968807 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:18.069809 kubelet[2137]: E0910 00:31:18.069720 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:18.169863 kubelet[2137]: E0910 00:31:18.169800 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:18.170010 kubelet[2137]: E0910 00:31:18.169859 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms"
Sep 10 00:31:18.270182 kubelet[2137]: E0910 00:31:18.270137 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:18.291366 kubelet[2137]: E0910 00:31:18.291322 2137 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 10 00:31:18.370904 kubelet[2137]: E0910 00:31:18.370808 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:18.471333 kubelet[2137]: E0910 00:31:18.471279 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:18.549135 kubelet[2137]: W0910 00:31:18.549068 2137 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 10 00:31:18.549135 kubelet[2137]: E0910 00:31:18.549130 2137 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:18.572409 kubelet[2137]: E0910 00:31:18.572373 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:18.673255 kubelet[2137]: E0910 00:31:18.673080 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:18.677111 kubelet[2137]: I0910 00:31:18.677064 2137 policy_none.go:49] "None policy: Start"
Sep 10 00:31:18.677940 kubelet[2137]: I0910 00:31:18.677907 2137 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 10 00:31:18.677940 kubelet[2137]: I0910 00:31:18.677933 2137 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 00:31:18.760246 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 10 00:31:18.775767 kubelet[2137]: E0910 00:31:18.773480 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:31:18.780631 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 10 00:31:18.783974 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 10 00:31:18.799618 kubelet[2137]: I0910 00:31:18.799378 2137 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 00:31:18.799618 kubelet[2137]: I0910 00:31:18.799622 2137 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 00:31:18.799822 kubelet[2137]: I0910 00:31:18.799639 2137 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 00:31:18.800221 kubelet[2137]: I0910 00:31:18.799924 2137 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 00:31:18.801179 kubelet[2137]: E0910 00:31:18.801137 2137 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 10 00:31:18.878227 kubelet[2137]: W0910 00:31:18.878147 2137 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 10 00:31:18.878287 kubelet[2137]: E0910 00:31:18.878237 2137 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:18.902155 kubelet[2137]: I0910 00:31:18.901837 2137 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:31:18.902282 kubelet[2137]: E0910 00:31:18.902245 2137 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Sep 10 00:31:18.954924 kubelet[2137]: W0910 00:31:18.954819 2137 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 10 00:31:18.954924 kubelet[2137]: E0910 00:31:18.954873 2137 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:18.970655 kubelet[2137]: E0910 00:31:18.970608 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s"
Sep 10 00:31:19.003432 kubelet[2137]: W0910 00:31:19.003347 2137 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 10 00:31:19.003536 kubelet[2137]: E0910 00:31:19.003438 2137 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:19.100665 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice.
Sep 10 00:31:19.103914 kubelet[2137]: I0910 00:31:19.103869 2137 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:31:19.104226 kubelet[2137]: E0910 00:31:19.104198 2137 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Sep 10 00:31:19.109476 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice.
Sep 10 00:31:19.127962 systemd[1]: Created slice kubepods-burstable-podb1aa62331e1304bba1f96c3c0239d4dc.slice - libcontainer container kubepods-burstable-podb1aa62331e1304bba1f96c3c0239d4dc.slice.
Sep 10 00:31:19.176358 kubelet[2137]: I0910 00:31:19.176261 2137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:31:19.176358 kubelet[2137]: I0910 00:31:19.176328 2137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:31:19.176358 kubelet[2137]: I0910 00:31:19.176354 2137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1aa62331e1304bba1f96c3c0239d4dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1aa62331e1304bba1f96c3c0239d4dc\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:31:19.176358 kubelet[2137]: I0910 00:31:19.176376 2137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 00:31:19.176627 kubelet[2137]: I0910 00:31:19.176394 2137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1aa62331e1304bba1f96c3c0239d4dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1aa62331e1304bba1f96c3c0239d4dc\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:31:19.176627 kubelet[2137]: I0910 00:31:19.176419 2137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1aa62331e1304bba1f96c3c0239d4dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1aa62331e1304bba1f96c3c0239d4dc\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:31:19.176627 kubelet[2137]: I0910 00:31:19.176516 2137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:31:19.176627 kubelet[2137]: I0910 00:31:19.176573 2137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:31:19.176627 kubelet[2137]: I0910 00:31:19.176594 2137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:31:19.407810 kubelet[2137]: E0910 00:31:19.407648 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:19.408673 containerd[1461]: time="2025-09-10T00:31:19.408615584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 10 00:31:19.426050 kubelet[2137]: E0910 00:31:19.426009 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:19.430413 kubelet[2137]: E0910 00:31:19.430356 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:19.430598 containerd[1461]: time="2025-09-10T00:31:19.430381119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 10 00:31:19.430823 containerd[1461]: time="2025-09-10T00:31:19.430786832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1aa62331e1304bba1f96c3c0239d4dc,Namespace:kube-system,Attempt:0,}"
Sep 10 00:31:19.505868 kubelet[2137]: I0910 00:31:19.505820 2137 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:31:19.506358 kubelet[2137]: E0910 00:31:19.506324 2137 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Sep 10 00:31:19.598410 kubelet[2137]: E0910 00:31:19.598356 2137 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:19.991942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778654573.mount: Deactivated successfully.
Sep 10 00:31:19.998233 containerd[1461]: time="2025-09-10T00:31:19.998191729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:31:19.999319 containerd[1461]: time="2025-09-10T00:31:19.999289115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:31:20.000207 containerd[1461]: time="2025-09-10T00:31:20.000147277Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 10 00:31:20.001080 containerd[1461]: time="2025-09-10T00:31:20.001041841Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:31:20.002117 containerd[1461]: time="2025-09-10T00:31:20.002060754Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 10 00:31:20.002923 containerd[1461]: time="2025-09-10T00:31:20.002887577Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 10 00:31:20.006181 containerd[1461]: time="2025-09-10T00:31:20.004121124Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:31:20.008573 containerd[1461]: time="2025-09-10T00:31:20.008522943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:31:20.009376 containerd[1461]: time="2025-09-10T00:31:20.009344015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 600.637356ms"
Sep 10 00:31:20.011666 containerd[1461]: time="2025-09-10T00:31:20.011627655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.142887ms"
Sep 10 00:31:20.012400 containerd[1461]: time="2025-09-10T00:31:20.012343475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.478624ms"
Sep 10 00:31:20.200669 containerd[1461]: time="2025-09-10T00:31:20.200478490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:31:20.202139 containerd[1461]: time="2025-09-10T00:31:20.201238909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:31:20.202139 containerd[1461]: time="2025-09-10T00:31:20.201337057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:20.202139 containerd[1461]: time="2025-09-10T00:31:20.201609920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:20.203391 containerd[1461]: time="2025-09-10T00:31:20.202788986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:31:20.203391 containerd[1461]: time="2025-09-10T00:31:20.202847534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:31:20.203391 containerd[1461]: time="2025-09-10T00:31:20.202892746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:20.203391 containerd[1461]: time="2025-09-10T00:31:20.203056878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:20.206287 containerd[1461]: time="2025-09-10T00:31:20.206024133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:31:20.206287 containerd[1461]: time="2025-09-10T00:31:20.206083714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:31:20.206287 containerd[1461]: time="2025-09-10T00:31:20.206098384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:20.206287 containerd[1461]: time="2025-09-10T00:31:20.206213797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:20.234307 systemd[1]: Started cri-containerd-58d5df6a5d37af5d9cc025a33c38483ee2ea52f0075eb317e49e257b6fcec164.scope - libcontainer container 58d5df6a5d37af5d9cc025a33c38483ee2ea52f0075eb317e49e257b6fcec164.
Sep 10 00:31:20.236521 systemd[1]: Started cri-containerd-6cb421f3626f20cedd3d590037bf5290494a6dc156802c936dce69ab26d2e071.scope - libcontainer container 6cb421f3626f20cedd3d590037bf5290494a6dc156802c936dce69ab26d2e071.
Sep 10 00:31:20.255780 kubelet[2137]: W0910 00:31:20.255562 2137 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 10 00:31:20.255780 kubelet[2137]: E0910 00:31:20.255621 2137 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:31:20.270262 systemd[1]: Started cri-containerd-87082ab5655fc6f50bc7424a0025b62437df7829138d9311c2c131d5db259b4d.scope - libcontainer container 87082ab5655fc6f50bc7424a0025b62437df7829138d9311c2c131d5db259b4d.
Sep 10 00:31:20.312620 kubelet[2137]: I0910 00:31:20.312528 2137 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:31:20.313834 kubelet[2137]: E0910 00:31:20.313644 2137 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 10 00:31:20.327609 containerd[1461]: time="2025-09-10T00:31:20.327552938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"58d5df6a5d37af5d9cc025a33c38483ee2ea52f0075eb317e49e257b6fcec164\"" Sep 10 00:31:20.328278 containerd[1461]: time="2025-09-10T00:31:20.328217593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1aa62331e1304bba1f96c3c0239d4dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cb421f3626f20cedd3d590037bf5290494a6dc156802c936dce69ab26d2e071\"" Sep 10 00:31:20.329072 kubelet[2137]: E0910 00:31:20.329037 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:20.329449 kubelet[2137]: E0910 00:31:20.329366 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:20.331906 containerd[1461]: time="2025-09-10T00:31:20.331872250Z" level=info msg="CreateContainer within sandbox \"58d5df6a5d37af5d9cc025a33c38483ee2ea52f0075eb317e49e257b6fcec164\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:31:20.332078 containerd[1461]: time="2025-09-10T00:31:20.331884735Z" level=info msg="CreateContainer within sandbox \"6cb421f3626f20cedd3d590037bf5290494a6dc156802c936dce69ab26d2e071\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:31:20.336290 containerd[1461]: time="2025-09-10T00:31:20.336251613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"87082ab5655fc6f50bc7424a0025b62437df7829138d9311c2c131d5db259b4d\"" Sep 10 00:31:20.337934 kubelet[2137]: E0910 00:31:20.337907 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:20.340270 containerd[1461]: time="2025-09-10T00:31:20.340233311Z" level=info msg="CreateContainer within sandbox \"87082ab5655fc6f50bc7424a0025b62437df7829138d9311c2c131d5db259b4d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:31:20.355247 containerd[1461]: time="2025-09-10T00:31:20.355200282Z" level=info msg="CreateContainer within sandbox \"6cb421f3626f20cedd3d590037bf5290494a6dc156802c936dce69ab26d2e071\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"670ddce0b45f0e7eb62ddf1cf04bd85c5d3dfe4674845685ccbf73886c586529\"" Sep 10 00:31:20.356066 containerd[1461]: time="2025-09-10T00:31:20.356022285Z" level=info msg="StartContainer for \"670ddce0b45f0e7eb62ddf1cf04bd85c5d3dfe4674845685ccbf73886c586529\"" Sep 10 00:31:20.358533 containerd[1461]: time="2025-09-10T00:31:20.358470590Z" level=info msg="CreateContainer within sandbox \"58d5df6a5d37af5d9cc025a33c38483ee2ea52f0075eb317e49e257b6fcec164\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f66cbc268c02c599395cf9aa8694c6567ad3bcc419461bbb8fc1866eb177f41a\"" Sep 10 00:31:20.360600 containerd[1461]: time="2025-09-10T00:31:20.360471259Z" level=info msg="StartContainer for \"f66cbc268c02c599395cf9aa8694c6567ad3bcc419461bbb8fc1866eb177f41a\"" Sep 10 00:31:20.366641 containerd[1461]: 
time="2025-09-10T00:31:20.366593680Z" level=info msg="CreateContainer within sandbox \"87082ab5655fc6f50bc7424a0025b62437df7829138d9311c2c131d5db259b4d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"12951152f209eb3671fa567a6c6d8ef59b143465006cf2587a8215bf04eebb55\"" Sep 10 00:31:20.368934 containerd[1461]: time="2025-09-10T00:31:20.368867923Z" level=info msg="StartContainer for \"12951152f209eb3671fa567a6c6d8ef59b143465006cf2587a8215bf04eebb55\"" Sep 10 00:31:20.404107 systemd[1]: Started cri-containerd-12951152f209eb3671fa567a6c6d8ef59b143465006cf2587a8215bf04eebb55.scope - libcontainer container 12951152f209eb3671fa567a6c6d8ef59b143465006cf2587a8215bf04eebb55. Sep 10 00:31:20.407445 systemd[1]: Started cri-containerd-670ddce0b45f0e7eb62ddf1cf04bd85c5d3dfe4674845685ccbf73886c586529.scope - libcontainer container 670ddce0b45f0e7eb62ddf1cf04bd85c5d3dfe4674845685ccbf73886c586529. Sep 10 00:31:20.422308 systemd[1]: Started cri-containerd-f66cbc268c02c599395cf9aa8694c6567ad3bcc419461bbb8fc1866eb177f41a.scope - libcontainer container f66cbc268c02c599395cf9aa8694c6567ad3bcc419461bbb8fc1866eb177f41a. 
Sep 10 00:31:20.470615 containerd[1461]: time="2025-09-10T00:31:20.470567847Z" level=info msg="StartContainer for \"12951152f209eb3671fa567a6c6d8ef59b143465006cf2587a8215bf04eebb55\" returns successfully" Sep 10 00:31:20.484552 containerd[1461]: time="2025-09-10T00:31:20.484188323Z" level=info msg="StartContainer for \"670ddce0b45f0e7eb62ddf1cf04bd85c5d3dfe4674845685ccbf73886c586529\" returns successfully" Sep 10 00:31:20.502877 containerd[1461]: time="2025-09-10T00:31:20.502819219Z" level=info msg="StartContainer for \"f66cbc268c02c599395cf9aa8694c6567ad3bcc419461bbb8fc1866eb177f41a\" returns successfully" Sep 10 00:31:20.601923 kubelet[2137]: E0910 00:31:20.600282 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:20.608228 kubelet[2137]: E0910 00:31:20.607295 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:20.610032 kubelet[2137]: E0910 00:31:20.609991 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:21.610682 kubelet[2137]: E0910 00:31:21.610624 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:21.889739 kubelet[2137]: E0910 00:31:21.889577 2137 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:31:21.915950 kubelet[2137]: I0910 00:31:21.915914 2137 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:31:22.028708 kubelet[2137]: I0910 00:31:22.028446 2137 kubelet_node_status.go:75] 
"Successfully registered node" node="localhost" Sep 10 00:31:22.028708 kubelet[2137]: E0910 00:31:22.028502 2137 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 00:31:22.038443 kubelet[2137]: E0910 00:31:22.038395 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:22.139195 kubelet[2137]: E0910 00:31:22.139086 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:22.239385 kubelet[2137]: E0910 00:31:22.239237 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:22.340272 kubelet[2137]: E0910 00:31:22.340143 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:22.441085 kubelet[2137]: E0910 00:31:22.441024 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:22.541800 kubelet[2137]: E0910 00:31:22.541709 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:22.641857 kubelet[2137]: E0910 00:31:22.641806 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:22.742563 kubelet[2137]: E0910 00:31:22.742471 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:22.805861 kubelet[2137]: E0910 00:31:22.805736 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:22.843013 kubelet[2137]: E0910 00:31:22.842953 2137 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"localhost\" not found" Sep 10 00:31:22.943508 kubelet[2137]: E0910 00:31:22.943461 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:23.044347 kubelet[2137]: E0910 00:31:23.044284 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:23.145366 kubelet[2137]: E0910 00:31:23.145183 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:23.246076 kubelet[2137]: E0910 00:31:23.246020 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:23.346976 kubelet[2137]: E0910 00:31:23.346930 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:23.447704 kubelet[2137]: E0910 00:31:23.447555 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:23.545226 kubelet[2137]: E0910 00:31:23.545191 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:23.548239 kubelet[2137]: E0910 00:31:23.548209 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:23.649229 kubelet[2137]: E0910 00:31:23.649193 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:23.750104 kubelet[2137]: E0910 00:31:23.749968 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:23.850425 kubelet[2137]: E0910 00:31:23.850382 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Sep 10 00:31:23.950912 kubelet[2137]: E0910 00:31:23.950871 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:24.051693 kubelet[2137]: E0910 00:31:24.051560 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:24.152411 kubelet[2137]: E0910 00:31:24.152344 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:24.167570 systemd[1]: Reloading requested from client PID 2414 ('systemctl') (unit session-7.scope)... Sep 10 00:31:24.167588 systemd[1]: Reloading... Sep 10 00:31:24.253623 kubelet[2137]: E0910 00:31:24.252958 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:24.253769 zram_generator::config[2456]: No configuration found. Sep 10 00:31:24.354096 kubelet[2137]: E0910 00:31:24.353969 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:24.364932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:31:24.454587 kubelet[2137]: E0910 00:31:24.454536 2137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:24.468697 systemd[1]: Reloading finished in 300 ms. Sep 10 00:31:24.524069 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:24.549405 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:31:24.549818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:24.558593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 10 00:31:24.733834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:24.739688 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:31:24.789914 kubelet[2498]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:31:24.789914 kubelet[2498]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 00:31:24.789914 kubelet[2498]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:31:24.790496 kubelet[2498]: I0910 00:31:24.789969 2498 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:31:24.801259 kubelet[2498]: I0910 00:31:24.801217 2498 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:31:24.801259 kubelet[2498]: I0910 00:31:24.801247 2498 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:31:24.801506 kubelet[2498]: I0910 00:31:24.801490 2498 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:31:24.802848 kubelet[2498]: I0910 00:31:24.802822 2498 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 10 00:31:24.804749 kubelet[2498]: I0910 00:31:24.804699 2498 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:31:24.808207 kubelet[2498]: E0910 00:31:24.808174 2498 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:31:24.808207 kubelet[2498]: I0910 00:31:24.808207 2498 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:31:24.864539 kubelet[2498]: I0910 00:31:24.864405 2498 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 00:31:24.864704 kubelet[2498]: I0910 00:31:24.864620 2498 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:31:24.864909 kubelet[2498]: I0910 00:31:24.864764 2498 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:31:24.864992 kubelet[2498]: I0910 00:31:24.864816 2498 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:31:24.865085 kubelet[2498]: I0910 00:31:24.864999 2498 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:31:24.865085 kubelet[2498]: I0910 00:31:24.865009 2498 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:31:24.865085 kubelet[2498]: I0910 00:31:24.865043 2498 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:31:24.865206 kubelet[2498]: I0910 00:31:24.865185 2498 kubelet.go:408] "Attempting 
to sync node with API server" Sep 10 00:31:24.865206 kubelet[2498]: I0910 00:31:24.865202 2498 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:31:24.865261 kubelet[2498]: I0910 00:31:24.865239 2498 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:31:24.865261 kubelet[2498]: I0910 00:31:24.865250 2498 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:31:24.867132 kubelet[2498]: I0910 00:31:24.867085 2498 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:31:24.867912 kubelet[2498]: I0910 00:31:24.867871 2498 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:31:24.868810 kubelet[2498]: I0910 00:31:24.868700 2498 server.go:1274] "Started kubelet" Sep 10 00:31:24.869840 kubelet[2498]: I0910 00:31:24.869794 2498 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:31:24.870005 kubelet[2498]: I0910 00:31:24.869938 2498 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:31:24.870328 kubelet[2498]: I0910 00:31:24.870309 2498 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:31:24.872309 kubelet[2498]: I0910 00:31:24.872290 2498 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:31:24.873263 kubelet[2498]: I0910 00:31:24.873246 2498 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:31:24.873702 kubelet[2498]: I0910 00:31:24.873522 2498 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:31:24.873702 kubelet[2498]: E0910 00:31:24.873640 2498 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:24.873941 kubelet[2498]: I0910 00:31:24.873849 2498 desired_state_of_world_populator.go:147] "Desired 
state populator starts to run" Sep 10 00:31:24.874763 kubelet[2498]: I0910 00:31:24.872714 2498 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:31:24.879067 kubelet[2498]: I0910 00:31:24.878794 2498 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:31:24.879067 kubelet[2498]: I0910 00:31:24.878895 2498 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:31:24.881503 kubelet[2498]: I0910 00:31:24.881476 2498 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:31:24.886125 kubelet[2498]: I0910 00:31:24.886104 2498 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:31:24.888956 kubelet[2498]: E0910 00:31:24.888929 2498 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:31:24.898672 kubelet[2498]: I0910 00:31:24.898628 2498 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:31:24.900369 kubelet[2498]: I0910 00:31:24.900322 2498 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:31:24.900448 kubelet[2498]: I0910 00:31:24.900395 2498 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:31:24.900448 kubelet[2498]: I0910 00:31:24.900421 2498 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:31:24.900508 kubelet[2498]: E0910 00:31:24.900490 2498 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:31:24.926234 kubelet[2498]: I0910 00:31:24.926154 2498 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:31:24.926234 kubelet[2498]: I0910 00:31:24.926202 2498 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:31:24.926234 kubelet[2498]: I0910 00:31:24.926222 2498 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:31:24.926429 kubelet[2498]: I0910 00:31:24.926371 2498 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:31:24.926429 kubelet[2498]: I0910 00:31:24.926393 2498 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:31:24.926429 kubelet[2498]: I0910 00:31:24.926414 2498 policy_none.go:49] "None policy: Start" Sep 10 00:31:24.928184 kubelet[2498]: I0910 00:31:24.927211 2498 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:31:24.928184 kubelet[2498]: I0910 00:31:24.927243 2498 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:31:24.928363 kubelet[2498]: I0910 00:31:24.928341 2498 state_mem.go:75] "Updated machine memory state" Sep 10 00:31:24.933271 kubelet[2498]: I0910 00:31:24.933245 2498 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:31:24.933480 kubelet[2498]: I0910 00:31:24.933425 2498 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:31:24.933480 kubelet[2498]: I0910 00:31:24.933441 2498 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:31:24.935132 kubelet[2498]: I0910 00:31:24.934248 2498 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:31:25.040869 kubelet[2498]: I0910 00:31:25.040744 2498 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:31:25.046673 kubelet[2498]: I0910 00:31:25.046632 2498 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 00:31:25.046860 kubelet[2498]: I0910 00:31:25.046713 2498 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:31:25.083404 kubelet[2498]: I0910 00:31:25.083373 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1aa62331e1304bba1f96c3c0239d4dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1aa62331e1304bba1f96c3c0239d4dc\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:25.083502 kubelet[2498]: I0910 00:31:25.083402 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1aa62331e1304bba1f96c3c0239d4dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1aa62331e1304bba1f96c3c0239d4dc\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:25.083502 kubelet[2498]: I0910 00:31:25.083432 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:25.083575 kubelet[2498]: I0910 00:31:25.083542 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:25.083604 kubelet[2498]: I0910 00:31:25.083591 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1aa62331e1304bba1f96c3c0239d4dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1aa62331e1304bba1f96c3c0239d4dc\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:25.083631 kubelet[2498]: I0910 00:31:25.083608 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:25.083662 kubelet[2498]: I0910 00:31:25.083636 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:25.083689 kubelet[2498]: I0910 00:31:25.083677 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:25.083740 kubelet[2498]: I0910 00:31:25.083719 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:31:25.152011 sudo[2536]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 00:31:25.152414 sudo[2536]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 10 00:31:25.308501 kubelet[2498]: E0910 00:31:25.308355 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:25.308501 kubelet[2498]: E0910 00:31:25.308427 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:25.308650 kubelet[2498]: E0910 00:31:25.308551 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:25.750762 sudo[2536]: pam_unix(sudo:session): session closed for user root Sep 10 00:31:25.865519 kubelet[2498]: I0910 00:31:25.865441 2498 apiserver.go:52] "Watching apiserver" Sep 10 00:31:25.874735 kubelet[2498]: I0910 00:31:25.874678 2498 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:31:25.917199 kubelet[2498]: E0910 00:31:25.914807 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:25.917199 kubelet[2498]: E0910 00:31:25.915014 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 10 00:31:25.981026 kubelet[2498]: E0910 00:31:25.980966 2498 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:25.981189 kubelet[2498]: E0910 00:31:25.981157 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:25.981395 kubelet[2498]: I0910 00:31:25.981349 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.981311455 podStartE2EDuration="981.311455ms" podCreationTimestamp="2025-09-10 00:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:31:25.981073399 +0000 UTC m=+1.237119304" watchObservedRunningTime="2025-09-10 00:31:25.981311455 +0000 UTC m=+1.237357350" Sep 10 00:31:26.001745 kubelet[2498]: I0910 00:31:26.001574 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.001546026 podStartE2EDuration="1.001546026s" podCreationTimestamp="2025-09-10 00:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:31:25.990227897 +0000 UTC m=+1.246273792" watchObservedRunningTime="2025-09-10 00:31:26.001546026 +0000 UTC m=+1.257591921" Sep 10 00:31:26.916513 kubelet[2498]: E0910 00:31:26.916473 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:27.353327 sudo[1639]: pam_unix(sudo:session): session closed for user root Sep 10 00:31:27.355516 sshd[1636]: pam_unix(sshd:session): session closed for user core Sep 10 
00:31:27.360125 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:60078.service: Deactivated successfully. Sep 10 00:31:27.362421 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:31:27.362631 systemd[1]: session-7.scope: Consumed 5.391s CPU time, 156.1M memory peak, 0B memory swap peak. Sep 10 00:31:27.363082 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:31:27.364089 systemd-logind[1444]: Removed session 7. Sep 10 00:31:30.398402 kubelet[2498]: E0910 00:31:30.398366 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:30.410372 kubelet[2498]: I0910 00:31:30.410299 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.410253801 podStartE2EDuration="5.410253801s" podCreationTimestamp="2025-09-10 00:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:31:26.003331213 +0000 UTC m=+1.259377108" watchObservedRunningTime="2025-09-10 00:31:30.410253801 +0000 UTC m=+5.666299686" Sep 10 00:31:30.652125 kubelet[2498]: E0910 00:31:30.651904 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:30.711022 kubelet[2498]: I0910 00:31:30.710984 2498 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:31:30.711596 containerd[1461]: time="2025-09-10T00:31:30.711549225Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 10 00:31:30.712094 kubelet[2498]: I0910 00:31:30.711801 2498 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:31:30.921736 kubelet[2498]: E0910 00:31:30.921578 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:30.921736 kubelet[2498]: E0910 00:31:30.921661 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:31.666107 kubelet[2498]: W0910 00:31:31.666040 2498 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 10 00:31:31.666107 kubelet[2498]: E0910 00:31:31.666093 2498 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 10 00:31:31.666545 kubelet[2498]: W0910 00:31:31.666126 2498 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 10 00:31:31.666545 kubelet[2498]: E0910 00:31:31.666137 2498 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch 
*v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 10 00:31:31.666545 kubelet[2498]: W0910 00:31:31.666209 2498 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 10 00:31:31.666545 kubelet[2498]: E0910 00:31:31.666221 2498 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 10 00:31:31.667892 systemd[1]: Created slice kubepods-besteffort-pod6cc91414_6d21_4c56_b58e_b7f26ac41f44.slice - libcontainer container kubepods-besteffort-pod6cc91414_6d21_4c56_b58e_b7f26ac41f44.slice. Sep 10 00:31:31.696069 systemd[1]: Created slice kubepods-burstable-pod8270908e_cdfb_4c56_aeed_f2328f128cf6.slice - libcontainer container kubepods-burstable-pod8270908e_cdfb_4c56_aeed_f2328f128cf6.slice. 
Sep 10 00:31:31.723741 kubelet[2498]: I0910 00:31:31.723692 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-xtables-lock\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.723741 kubelet[2498]: I0910 00:31:31.723730 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-config-path\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.723959 kubelet[2498]: I0910 00:31:31.723758 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6cc91414-6d21-4c56-b58e-b7f26ac41f44-kube-proxy\") pod \"kube-proxy-jd8cd\" (UID: \"6cc91414-6d21-4c56-b58e-b7f26ac41f44\") " pod="kube-system/kube-proxy-jd8cd" Sep 10 00:31:31.723959 kubelet[2498]: I0910 00:31:31.723773 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-host-proc-sys-net\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.723959 kubelet[2498]: I0910 00:31:31.723787 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4ln\" (UniqueName: \"kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-kube-api-access-gl4ln\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.723959 kubelet[2498]: I0910 00:31:31.723811 2498 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cni-path\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.723959 kubelet[2498]: I0910 00:31:31.723826 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-hubble-tls\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.723959 kubelet[2498]: I0910 00:31:31.723840 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cc91414-6d21-4c56-b58e-b7f26ac41f44-lib-modules\") pod \"kube-proxy-jd8cd\" (UID: \"6cc91414-6d21-4c56-b58e-b7f26ac41f44\") " pod="kube-system/kube-proxy-jd8cd" Sep 10 00:31:31.724257 kubelet[2498]: I0910 00:31:31.723860 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-hostproc\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.724257 kubelet[2498]: I0910 00:31:31.723873 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-cgroup\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.724257 kubelet[2498]: I0910 00:31:31.723887 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-run\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.724257 kubelet[2498]: I0910 00:31:31.723903 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-lib-modules\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.724257 kubelet[2498]: I0910 00:31:31.723915 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-host-proc-sys-kernel\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.724257 kubelet[2498]: I0910 00:31:31.723929 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-etc-cni-netd\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.724449 kubelet[2498]: I0910 00:31:31.723942 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-bpf-maps\") pod \"cilium-6s87r\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.724449 kubelet[2498]: I0910 00:31:31.723955 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8270908e-cdfb-4c56-aeed-f2328f128cf6-clustermesh-secrets\") pod \"cilium-6s87r\" (UID: 
\"8270908e-cdfb-4c56-aeed-f2328f128cf6\") " pod="kube-system/cilium-6s87r" Sep 10 00:31:31.724449 kubelet[2498]: I0910 00:31:31.723968 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cc91414-6d21-4c56-b58e-b7f26ac41f44-xtables-lock\") pod \"kube-proxy-jd8cd\" (UID: \"6cc91414-6d21-4c56-b58e-b7f26ac41f44\") " pod="kube-system/kube-proxy-jd8cd" Sep 10 00:31:31.724449 kubelet[2498]: I0910 00:31:31.723991 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdwzp\" (UniqueName: \"kubernetes.io/projected/6cc91414-6d21-4c56-b58e-b7f26ac41f44-kube-api-access-kdwzp\") pod \"kube-proxy-jd8cd\" (UID: \"6cc91414-6d21-4c56-b58e-b7f26ac41f44\") " pod="kube-system/kube-proxy-jd8cd" Sep 10 00:31:31.728533 systemd[1]: Created slice kubepods-besteffort-pod83b7f552_3827_4d34_8b87_852d9f4f172c.slice - libcontainer container kubepods-besteffort-pod83b7f552_3827_4d34_8b87_852d9f4f172c.slice. 
Sep 10 00:31:31.825277 kubelet[2498]: I0910 00:31:31.825211 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzhbx\" (UniqueName: \"kubernetes.io/projected/83b7f552-3827-4d34-8b87-852d9f4f172c-kube-api-access-wzhbx\") pod \"cilium-operator-5d85765b45-qbd4b\" (UID: \"83b7f552-3827-4d34-8b87-852d9f4f172c\") " pod="kube-system/cilium-operator-5d85765b45-qbd4b" Sep 10 00:31:31.826092 kubelet[2498]: I0910 00:31:31.825504 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83b7f552-3827-4d34-8b87-852d9f4f172c-cilium-config-path\") pod \"cilium-operator-5d85765b45-qbd4b\" (UID: \"83b7f552-3827-4d34-8b87-852d9f4f172c\") " pod="kube-system/cilium-operator-5d85765b45-qbd4b" Sep 10 00:31:31.994724 kubelet[2498]: E0910 00:31:31.994673 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:31.995404 containerd[1461]: time="2025-09-10T00:31:31.995354534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jd8cd,Uid:6cc91414-6d21-4c56-b58e-b7f26ac41f44,Namespace:kube-system,Attempt:0,}" Sep 10 00:31:32.035756 containerd[1461]: time="2025-09-10T00:31:32.035598211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:31:32.035756 containerd[1461]: time="2025-09-10T00:31:32.035691896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:31:32.035756 containerd[1461]: time="2025-09-10T00:31:32.035707136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:32.035940 containerd[1461]: time="2025-09-10T00:31:32.035822113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:32.060341 systemd[1]: Started cri-containerd-8e963797d23be671c145e8585781bbc1c66780b89b6ba0f03efebd633c5b878f.scope - libcontainer container 8e963797d23be671c145e8585781bbc1c66780b89b6ba0f03efebd633c5b878f. Sep 10 00:31:32.085752 containerd[1461]: time="2025-09-10T00:31:32.085495162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jd8cd,Uid:6cc91414-6d21-4c56-b58e-b7f26ac41f44,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e963797d23be671c145e8585781bbc1c66780b89b6ba0f03efebd633c5b878f\"" Sep 10 00:31:32.086262 kubelet[2498]: E0910 00:31:32.086235 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:32.088188 containerd[1461]: time="2025-09-10T00:31:32.088134080Z" level=info msg="CreateContainer within sandbox \"8e963797d23be671c145e8585781bbc1c66780b89b6ba0f03efebd633c5b878f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:31:32.438561 containerd[1461]: time="2025-09-10T00:31:32.438481026Z" level=info msg="CreateContainer within sandbox \"8e963797d23be671c145e8585781bbc1c66780b89b6ba0f03efebd633c5b878f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34f26536c2acc70a0a15b207fa854c645174477ba9ce3211444b32e35d196715\"" Sep 10 00:31:32.439213 containerd[1461]: time="2025-09-10T00:31:32.439153635Z" level=info msg="StartContainer for \"34f26536c2acc70a0a15b207fa854c645174477ba9ce3211444b32e35d196715\"" Sep 10 00:31:32.473317 systemd[1]: Started cri-containerd-34f26536c2acc70a0a15b207fa854c645174477ba9ce3211444b32e35d196715.scope - libcontainer container 
34f26536c2acc70a0a15b207fa854c645174477ba9ce3211444b32e35d196715. Sep 10 00:31:32.507365 containerd[1461]: time="2025-09-10T00:31:32.507293472Z" level=info msg="StartContainer for \"34f26536c2acc70a0a15b207fa854c645174477ba9ce3211444b32e35d196715\" returns successfully" Sep 10 00:31:32.825960 kubelet[2498]: E0910 00:31:32.825801 2498 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 10 00:31:32.825960 kubelet[2498]: E0910 00:31:32.825934 2498 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-config-path podName:8270908e-cdfb-4c56-aeed-f2328f128cf6 nodeName:}" failed. No retries permitted until 2025-09-10 00:31:33.325910332 +0000 UTC m=+8.581956227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-config-path") pod "cilium-6s87r" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6") : failed to sync configmap cache: timed out waiting for the condition Sep 10 00:31:32.827476 kubelet[2498]: E0910 00:31:32.827016 2498 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 10 00:31:32.827476 kubelet[2498]: E0910 00:31:32.827050 2498 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 10 00:31:32.827476 kubelet[2498]: E0910 00:31:32.827188 2498 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8270908e-cdfb-4c56-aeed-f2328f128cf6-clustermesh-secrets podName:8270908e-cdfb-4c56-aeed-f2328f128cf6 nodeName:}" failed. No retries permitted until 2025-09-10 00:31:33.327152507 +0000 UTC m=+8.583198392 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/8270908e-cdfb-4c56-aeed-f2328f128cf6-clustermesh-secrets") pod "cilium-6s87r" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6") : failed to sync secret cache: timed out waiting for the condition Sep 10 00:31:32.827476 kubelet[2498]: E0910 00:31:32.827052 2498 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-6s87r: failed to sync secret cache: timed out waiting for the condition Sep 10 00:31:32.827476 kubelet[2498]: E0910 00:31:32.827271 2498 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-hubble-tls podName:8270908e-cdfb-4c56-aeed-f2328f128cf6 nodeName:}" failed. No retries permitted until 2025-09-10 00:31:33.327261362 +0000 UTC m=+8.583307257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-hubble-tls") pod "cilium-6s87r" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6") : failed to sync secret cache: timed out waiting for the condition Sep 10 00:31:32.904589 kubelet[2498]: E0910 00:31:32.904544 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:32.927732 kubelet[2498]: E0910 00:31:32.927377 2498 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 10 00:31:32.927732 kubelet[2498]: E0910 00:31:32.927460 2498 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/83b7f552-3827-4d34-8b87-852d9f4f172c-cilium-config-path podName:83b7f552-3827-4d34-8b87-852d9f4f172c nodeName:}" failed. No retries permitted until 2025-09-10 00:31:33.427433835 +0000 UTC m=+8.683479730 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/83b7f552-3827-4d34-8b87-852d9f4f172c-cilium-config-path") pod "cilium-operator-5d85765b45-qbd4b" (UID: "83b7f552-3827-4d34-8b87-852d9f4f172c") : failed to sync configmap cache: timed out waiting for the condition Sep 10 00:31:32.929307 kubelet[2498]: E0910 00:31:32.929279 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:32.929831 kubelet[2498]: E0910 00:31:32.929794 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:33.498845 kubelet[2498]: E0910 00:31:33.498798 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:33.499673 containerd[1461]: time="2025-09-10T00:31:33.499292733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6s87r,Uid:8270908e-cdfb-4c56-aeed-f2328f128cf6,Namespace:kube-system,Attempt:0,}" Sep 10 00:31:33.524760 containerd[1461]: time="2025-09-10T00:31:33.524667886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:31:33.524760 containerd[1461]: time="2025-09-10T00:31:33.524723587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:31:33.524760 containerd[1461]: time="2025-09-10T00:31:33.524739037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:33.524947 containerd[1461]: time="2025-09-10T00:31:33.524832491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:33.535831 kubelet[2498]: E0910 00:31:33.535794 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:33.537462 containerd[1461]: time="2025-09-10T00:31:33.536699274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qbd4b,Uid:83b7f552-3827-4d34-8b87-852d9f4f172c,Namespace:kube-system,Attempt:0,}" Sep 10 00:31:33.549395 systemd[1]: Started cri-containerd-5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886.scope - libcontainer container 5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886. Sep 10 00:31:33.564412 containerd[1461]: time="2025-09-10T00:31:33.564314500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:31:33.564412 containerd[1461]: time="2025-09-10T00:31:33.564376863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:31:33.564412 containerd[1461]: time="2025-09-10T00:31:33.564391261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:33.565147 containerd[1461]: time="2025-09-10T00:31:33.564480517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:33.578020 containerd[1461]: time="2025-09-10T00:31:33.577975092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6s87r,Uid:8270908e-cdfb-4c56-aeed-f2328f128cf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\"" Sep 10 00:31:33.579723 kubelet[2498]: E0910 00:31:33.579698 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:33.581012 containerd[1461]: time="2025-09-10T00:31:33.580981727Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 00:31:33.595408 systemd[1]: Started cri-containerd-cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2.scope - libcontainer container cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2. Sep 10 00:31:33.631968 containerd[1461]: time="2025-09-10T00:31:33.631924915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qbd4b,Uid:83b7f552-3827-4d34-8b87-852d9f4f172c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2\"" Sep 10 00:31:33.632538 kubelet[2498]: E0910 00:31:33.632504 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:37.506712 update_engine[1448]: I20250910 00:31:37.506610 1448 update_attempter.cc:509] Updating boot flags... 
Sep 10 00:31:37.547192 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2877) Sep 10 00:31:37.588227 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2880) Sep 10 00:31:37.617217 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2880) Sep 10 00:31:40.484479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641400601.mount: Deactivated successfully. Sep 10 00:31:48.222044 containerd[1461]: time="2025-09-10T00:31:48.221965925Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:48.223128 containerd[1461]: time="2025-09-10T00:31:48.223083369Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 10 00:31:48.224530 containerd[1461]: time="2025-09-10T00:31:48.224507938Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:48.226376 containerd[1461]: time="2025-09-10T00:31:48.226332421Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.645312329s" Sep 10 00:31:48.226376 containerd[1461]: time="2025-09-10T00:31:48.226375475Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 10 00:31:48.233881 containerd[1461]: time="2025-09-10T00:31:48.233702194Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 00:31:48.250052 containerd[1461]: time="2025-09-10T00:31:48.250000787Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:31:48.269770 containerd[1461]: time="2025-09-10T00:31:48.269722330Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\"" Sep 10 00:31:48.273251 containerd[1461]: time="2025-09-10T00:31:48.273211597Z" level=info msg="StartContainer for \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\"" Sep 10 00:31:48.314340 systemd[1]: Started cri-containerd-a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688.scope - libcontainer container a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688. Sep 10 00:31:48.427295 systemd[1]: cri-containerd-a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688.scope: Deactivated successfully. 
Sep 10 00:31:48.486218 containerd[1461]: time="2025-09-10T00:31:48.486082837Z" level=info msg="StartContainer for \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\" returns successfully"
Sep 10 00:31:48.807856 containerd[1461]: time="2025-09-10T00:31:48.807352922Z" level=info msg="shim disconnected" id=a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688 namespace=k8s.io
Sep 10 00:31:48.807856 containerd[1461]: time="2025-09-10T00:31:48.807433568Z" level=warning msg="cleaning up after shim disconnected" id=a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688 namespace=k8s.io
Sep 10 00:31:48.807856 containerd[1461]: time="2025-09-10T00:31:48.807445692Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:31:48.956944 kubelet[2498]: E0910 00:31:48.956897 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:48.958550 containerd[1461]: time="2025-09-10T00:31:48.958511898Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 00:31:48.973519 containerd[1461]: time="2025-09-10T00:31:48.973476639Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\""
Sep 10 00:31:48.974237 kubelet[2498]: I0910 00:31:48.974198 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jd8cd" podStartSLOduration=17.974179891 podStartE2EDuration="17.974179891s" podCreationTimestamp="2025-09-10 00:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:31:32.947243367 +0000 UTC m=+8.203289262" watchObservedRunningTime="2025-09-10 00:31:48.974179891 +0000 UTC m=+24.230225786"
Sep 10 00:31:48.976142 containerd[1461]: time="2025-09-10T00:31:48.974446818Z" level=info msg="StartContainer for \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\""
Sep 10 00:31:49.004297 systemd[1]: Started cri-containerd-9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23.scope - libcontainer container 9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23.
Sep 10 00:31:49.030654 containerd[1461]: time="2025-09-10T00:31:49.030597474Z" level=info msg="StartContainer for \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\" returns successfully"
Sep 10 00:31:49.043051 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 00:31:49.043309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:31:49.043385 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:31:49.049477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:31:49.049677 systemd[1]: cri-containerd-9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23.scope: Deactivated successfully.
Sep 10 00:31:49.072534 containerd[1461]: time="2025-09-10T00:31:49.072410550Z" level=info msg="shim disconnected" id=9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23 namespace=k8s.io
Sep 10 00:31:49.072534 containerd[1461]: time="2025-09-10T00:31:49.072475816Z" level=warning msg="cleaning up after shim disconnected" id=9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23 namespace=k8s.io
Sep 10 00:31:49.072534 containerd[1461]: time="2025-09-10T00:31:49.072488181Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:31:49.075173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:31:49.159958 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:45652.service - OpenSSH per-connection server daemon (10.0.0.1:45652).
Sep 10 00:31:49.196904 sshd[3044]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:31:49.198802 sshd[3044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:31:49.203065 systemd-logind[1444]: New session 8 of user core.
Sep 10 00:31:49.212274 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 10 00:31:49.265808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688-rootfs.mount: Deactivated successfully.
Sep 10 00:31:49.341509 sshd[3044]: pam_unix(sshd:session): session closed for user core
Sep 10 00:31:49.346723 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:45652.service: Deactivated successfully.
Sep 10 00:31:49.349786 systemd[1]: session-8.scope: Deactivated successfully.
Sep 10 00:31:49.350523 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit.
Sep 10 00:31:49.351467 systemd-logind[1444]: Removed session 8.
Sep 10 00:31:49.962388 kubelet[2498]: E0910 00:31:49.962046 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:49.967922 containerd[1461]: time="2025-09-10T00:31:49.967884571Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 00:31:49.987435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184564426.mount: Deactivated successfully.
Sep 10 00:31:49.992080 containerd[1461]: time="2025-09-10T00:31:49.992025266Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\""
Sep 10 00:31:49.992769 containerd[1461]: time="2025-09-10T00:31:49.992634946Z" level=info msg="StartContainer for \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\""
Sep 10 00:31:50.036316 systemd[1]: Started cri-containerd-27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be.scope - libcontainer container 27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be.
Sep 10 00:31:50.070719 containerd[1461]: time="2025-09-10T00:31:50.070654783Z" level=info msg="StartContainer for \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\" returns successfully"
Sep 10 00:31:50.073562 systemd[1]: cri-containerd-27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be.scope: Deactivated successfully.
Sep 10 00:31:50.102053 containerd[1461]: time="2025-09-10T00:31:50.101981965Z" level=info msg="shim disconnected" id=27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be namespace=k8s.io
Sep 10 00:31:50.102053 containerd[1461]: time="2025-09-10T00:31:50.102042321Z" level=warning msg="cleaning up after shim disconnected" id=27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be namespace=k8s.io
Sep 10 00:31:50.102053 containerd[1461]: time="2025-09-10T00:31:50.102051109Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:31:50.267121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be-rootfs.mount: Deactivated successfully.
Sep 10 00:31:50.817930 containerd[1461]: time="2025-09-10T00:31:50.817860712Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:31:50.818871 containerd[1461]: time="2025-09-10T00:31:50.818824525Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 10 00:31:50.820351 containerd[1461]: time="2025-09-10T00:31:50.820299165Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:31:50.821734 containerd[1461]: time="2025-09-10T00:31:50.821693681Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.587961189s"
Sep 10 00:31:50.821734 containerd[1461]: time="2025-09-10T00:31:50.821729701Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 10 00:31:50.824079 containerd[1461]: time="2025-09-10T00:31:50.824027121Z" level=info msg="CreateContainer within sandbox \"cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 10 00:31:50.836065 containerd[1461]: time="2025-09-10T00:31:50.835992348Z" level=info msg="CreateContainer within sandbox \"cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\""
Sep 10 00:31:50.836740 containerd[1461]: time="2025-09-10T00:31:50.836713542Z" level=info msg="StartContainer for \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\""
Sep 10 00:31:50.868303 systemd[1]: Started cri-containerd-4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3.scope - libcontainer container 4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3.
Sep 10 00:31:50.898777 containerd[1461]: time="2025-09-10T00:31:50.898718199Z" level=info msg="StartContainer for \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\" returns successfully"
Sep 10 00:31:50.968910 kubelet[2498]: E0910 00:31:50.968855 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:50.973519 kubelet[2498]: E0910 00:31:50.973471 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:50.982388 containerd[1461]: time="2025-09-10T00:31:50.982322064Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 00:31:51.010318 kubelet[2498]: I0910 00:31:51.009772 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-qbd4b" podStartSLOduration=2.820148601 podStartE2EDuration="20.009755453s" podCreationTimestamp="2025-09-10 00:31:31 +0000 UTC" firstStartedPulling="2025-09-10 00:31:33.632915611 +0000 UTC m=+8.888961506" lastFinishedPulling="2025-09-10 00:31:50.822522462 +0000 UTC m=+26.078568358" observedRunningTime="2025-09-10 00:31:50.981126212 +0000 UTC m=+26.237172107" watchObservedRunningTime="2025-09-10 00:31:51.009755453 +0000 UTC m=+26.265801348"
Sep 10 00:31:51.010527 containerd[1461]: time="2025-09-10T00:31:51.009865105Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\""
Sep 10 00:31:51.010569 containerd[1461]: time="2025-09-10T00:31:51.010544086Z" level=info msg="StartContainer for \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\""
Sep 10 00:31:51.059354 systemd[1]: Started cri-containerd-d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89.scope - libcontainer container d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89.
Sep 10 00:31:51.098457 systemd[1]: cri-containerd-d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89.scope: Deactivated successfully.
Sep 10 00:31:51.122833 containerd[1461]: time="2025-09-10T00:31:51.122757930Z" level=info msg="StartContainer for \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\" returns successfully"
Sep 10 00:31:51.149159 containerd[1461]: time="2025-09-10T00:31:51.149072455Z" level=info msg="shim disconnected" id=d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89 namespace=k8s.io
Sep 10 00:31:51.149159 containerd[1461]: time="2025-09-10T00:31:51.149135938Z" level=warning msg="cleaning up after shim disconnected" id=d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89 namespace=k8s.io
Sep 10 00:31:51.149159 containerd[1461]: time="2025-09-10T00:31:51.149146328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:31:51.166568 containerd[1461]: time="2025-09-10T00:31:51.166433381Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:31:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 10 00:31:51.977612 kubelet[2498]: E0910 00:31:51.977566 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:51.978809 kubelet[2498]: E0910 00:31:51.977646 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:51.979670 containerd[1461]: time="2025-09-10T00:31:51.979622143Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 00:31:52.119151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869844449.mount: Deactivated successfully.
Sep 10 00:31:52.120299 containerd[1461]: time="2025-09-10T00:31:52.120257405Z" level=info msg="CreateContainer within sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\""
Sep 10 00:31:52.120833 containerd[1461]: time="2025-09-10T00:31:52.120756418Z" level=info msg="StartContainer for \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\""
Sep 10 00:31:52.174323 systemd[1]: Started cri-containerd-69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68.scope - libcontainer container 69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68.
Sep 10 00:31:52.206194 containerd[1461]: time="2025-09-10T00:31:52.206133520Z" level=info msg="StartContainer for \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\" returns successfully"
Sep 10 00:31:52.268462 systemd[1]: run-containerd-runc-k8s.io-69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68-runc.QXRiYz.mount: Deactivated successfully.
Sep 10 00:31:52.343475 kubelet[2498]: I0910 00:31:52.343419 2498 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 10 00:31:52.378245 systemd[1]: Created slice kubepods-burstable-pod34a1985d_a54f_4787_b764_a0e82a1f3ecf.slice - libcontainer container kubepods-burstable-pod34a1985d_a54f_4787_b764_a0e82a1f3ecf.slice.
Sep 10 00:31:52.389316 systemd[1]: Created slice kubepods-burstable-pod52015fe8_b43a_4ae8_951e_882813d1d5d1.slice - libcontainer container kubepods-burstable-pod52015fe8_b43a_4ae8_951e_882813d1d5d1.slice.
Sep 10 00:31:52.459660 kubelet[2498]: I0910 00:31:52.459602 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sv8f\" (UniqueName: \"kubernetes.io/projected/34a1985d-a54f-4787-b764-a0e82a1f3ecf-kube-api-access-9sv8f\") pod \"coredns-7c65d6cfc9-wj42b\" (UID: \"34a1985d-a54f-4787-b764-a0e82a1f3ecf\") " pod="kube-system/coredns-7c65d6cfc9-wj42b"
Sep 10 00:31:52.459660 kubelet[2498]: I0910 00:31:52.459644 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4d88\" (UniqueName: \"kubernetes.io/projected/52015fe8-b43a-4ae8-951e-882813d1d5d1-kube-api-access-h4d88\") pod \"coredns-7c65d6cfc9-gjvzd\" (UID: \"52015fe8-b43a-4ae8-951e-882813d1d5d1\") " pod="kube-system/coredns-7c65d6cfc9-gjvzd"
Sep 10 00:31:52.459947 kubelet[2498]: I0910 00:31:52.459691 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52015fe8-b43a-4ae8-951e-882813d1d5d1-config-volume\") pod \"coredns-7c65d6cfc9-gjvzd\" (UID: \"52015fe8-b43a-4ae8-951e-882813d1d5d1\") " pod="kube-system/coredns-7c65d6cfc9-gjvzd"
Sep 10 00:31:52.459947 kubelet[2498]: I0910 00:31:52.459713 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a1985d-a54f-4787-b764-a0e82a1f3ecf-config-volume\") pod \"coredns-7c65d6cfc9-wj42b\" (UID: \"34a1985d-a54f-4787-b764-a0e82a1f3ecf\") " pod="kube-system/coredns-7c65d6cfc9-wj42b"
Sep 10 00:31:52.683968 kubelet[2498]: E0910 00:31:52.683933 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:52.684722 containerd[1461]: time="2025-09-10T00:31:52.684669683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wj42b,Uid:34a1985d-a54f-4787-b764-a0e82a1f3ecf,Namespace:kube-system,Attempt:0,}"
Sep 10 00:31:52.695581 kubelet[2498]: E0910 00:31:52.695551 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:52.696056 containerd[1461]: time="2025-09-10T00:31:52.696011777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjvzd,Uid:52015fe8-b43a-4ae8-951e-882813d1d5d1,Namespace:kube-system,Attempt:0,}"
Sep 10 00:31:52.983583 kubelet[2498]: E0910 00:31:52.983457 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:52.997419 kubelet[2498]: I0910 00:31:52.996947 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6s87r" podStartSLOduration=7.344631475 podStartE2EDuration="21.996925224s" podCreationTimestamp="2025-09-10 00:31:31 +0000 UTC" firstStartedPulling="2025-09-10 00:31:33.580472011 +0000 UTC m=+8.836517906" lastFinishedPulling="2025-09-10 00:31:48.23276576 +0000 UTC m=+23.488811655" observedRunningTime="2025-09-10 00:31:52.996688727 +0000 UTC m=+28.252734622" watchObservedRunningTime="2025-09-10 00:31:52.996925224 +0000 UTC m=+28.252971119"
Sep 10 00:31:53.985538 kubelet[2498]: E0910 00:31:53.985484 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:54.354290 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:48332.service - OpenSSH per-connection server daemon (10.0.0.1:48332).
Sep 10 00:31:54.392805 sshd[3365]: Accepted publickey for core from 10.0.0.1 port 48332 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:31:54.394601 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:31:54.399323 systemd-logind[1444]: New session 9 of user core.
Sep 10 00:31:54.414340 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 00:31:54.524493 systemd-networkd[1388]: cilium_host: Link UP
Sep 10 00:31:54.524966 systemd-networkd[1388]: cilium_net: Link UP
Sep 10 00:31:54.525313 systemd-networkd[1388]: cilium_net: Gained carrier
Sep 10 00:31:54.525770 systemd-networkd[1388]: cilium_host: Gained carrier
Sep 10 00:31:54.619688 sshd[3365]: pam_unix(sshd:session): session closed for user core
Sep 10 00:31:54.629892 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:48332.service: Deactivated successfully.
Sep 10 00:31:54.634505 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 00:31:54.636042 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Sep 10 00:31:54.638812 systemd-logind[1444]: Removed session 9.
Sep 10 00:31:54.708056 systemd-networkd[1388]: cilium_vxlan: Link UP
Sep 10 00:31:54.708066 systemd-networkd[1388]: cilium_vxlan: Gained carrier
Sep 10 00:31:54.733433 systemd-networkd[1388]: cilium_net: Gained IPv6LL
Sep 10 00:31:54.797480 systemd-networkd[1388]: cilium_host: Gained IPv6LL
Sep 10 00:31:54.954209 kernel: NET: Registered PF_ALG protocol family
Sep 10 00:31:54.987113 kubelet[2498]: E0910 00:31:54.987068 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:55.786866 systemd-networkd[1388]: lxc_health: Link UP
Sep 10 00:31:55.797318 systemd-networkd[1388]: lxc_health: Gained carrier
Sep 10 00:31:56.085406 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL
Sep 10 00:31:56.353147 systemd-networkd[1388]: lxc137bff8a19a8: Link UP
Sep 10 00:31:56.355673 systemd-networkd[1388]: lxc298f08f01e33: Link UP
Sep 10 00:31:56.373191 kernel: eth0: renamed from tmpfb71e
Sep 10 00:31:56.390210 kernel: eth0: renamed from tmpd140e
Sep 10 00:31:56.395126 systemd-networkd[1388]: lxc137bff8a19a8: Gained carrier
Sep 10 00:31:56.398544 systemd-networkd[1388]: lxc298f08f01e33: Gained carrier
Sep 10 00:31:56.853398 systemd-networkd[1388]: lxc_health: Gained IPv6LL
Sep 10 00:31:57.500720 kubelet[2498]: E0910 00:31:57.500668 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:57.557416 systemd-networkd[1388]: lxc137bff8a19a8: Gained IPv6LL
Sep 10 00:31:57.992116 kubelet[2498]: E0910 00:31:57.992024 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:58.005375 systemd-networkd[1388]: lxc298f08f01e33: Gained IPv6LL
Sep 10 00:31:58.994493 kubelet[2498]: E0910 00:31:58.994423 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:31:59.637451 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:48334.service - OpenSSH per-connection server daemon (10.0.0.1:48334).
Sep 10 00:31:59.673581 sshd[3762]: Accepted publickey for core from 10.0.0.1 port 48334 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:31:59.675398 sshd[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:31:59.680486 systemd-logind[1444]: New session 10 of user core.
Sep 10 00:31:59.693394 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 00:31:59.866410 sshd[3762]: pam_unix(sshd:session): session closed for user core
Sep 10 00:31:59.872679 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:48334.service: Deactivated successfully.
Sep 10 00:31:59.877352 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 00:31:59.878758 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit.
Sep 10 00:31:59.881910 systemd-logind[1444]: Removed session 10.
Sep 10 00:31:59.942027 containerd[1461]: time="2025-09-10T00:31:59.941735362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:31:59.942027 containerd[1461]: time="2025-09-10T00:31:59.941827790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:31:59.942027 containerd[1461]: time="2025-09-10T00:31:59.941842277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:59.943029 containerd[1461]: time="2025-09-10T00:31:59.942868487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:59.966393 systemd[1]: Started cri-containerd-fb71e6c186e8d0ce76c356871afa1f4f98917541434b2074f343734ed5339d72.scope - libcontainer container fb71e6c186e8d0ce76c356871afa1f4f98917541434b2074f343734ed5339d72.
Sep 10 00:31:59.971130 containerd[1461]: time="2025-09-10T00:31:59.970682411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:31:59.971130 containerd[1461]: time="2025-09-10T00:31:59.970773597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:31:59.971130 containerd[1461]: time="2025-09-10T00:31:59.970816399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:59.971130 containerd[1461]: time="2025-09-10T00:31:59.970920920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:31:59.991462 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:31:59.998366 systemd[1]: Started cri-containerd-d140eb9b9f7e9ee08c895a1ec6021f0ed79186eb0255b03be0cc8418abf8b971.scope - libcontainer container d140eb9b9f7e9ee08c895a1ec6021f0ed79186eb0255b03be0cc8418abf8b971.
Sep 10 00:32:00.014142 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:32:00.023785 containerd[1461]: time="2025-09-10T00:32:00.023733339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjvzd,Uid:52015fe8-b43a-4ae8-951e-882813d1d5d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb71e6c186e8d0ce76c356871afa1f4f98917541434b2074f343734ed5339d72\""
Sep 10 00:32:00.024830 kubelet[2498]: E0910 00:32:00.024600 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:00.028086 containerd[1461]: time="2025-09-10T00:32:00.028056417Z" level=info msg="CreateContainer within sandbox \"fb71e6c186e8d0ce76c356871afa1f4f98917541434b2074f343734ed5339d72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:32:00.042274 containerd[1461]: time="2025-09-10T00:32:00.042225293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wj42b,Uid:34a1985d-a54f-4787-b764-a0e82a1f3ecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d140eb9b9f7e9ee08c895a1ec6021f0ed79186eb0255b03be0cc8418abf8b971\""
Sep 10 00:32:00.043848 kubelet[2498]: E0910 00:32:00.043817 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:00.047464 containerd[1461]: time="2025-09-10T00:32:00.047426515Z" level=info msg="CreateContainer within sandbox \"d140eb9b9f7e9ee08c895a1ec6021f0ed79186eb0255b03be0cc8418abf8b971\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:32:00.052487 containerd[1461]: time="2025-09-10T00:32:00.052450939Z" level=info msg="CreateContainer within sandbox \"fb71e6c186e8d0ce76c356871afa1f4f98917541434b2074f343734ed5339d72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10f7a61946958e41267732db83f93070e7ddad9bae242428f7d7545d228e106f\""
Sep 10 00:32:00.053233 containerd[1461]: time="2025-09-10T00:32:00.053182281Z" level=info msg="StartContainer for \"10f7a61946958e41267732db83f93070e7ddad9bae242428f7d7545d228e106f\""
Sep 10 00:32:00.069247 containerd[1461]: time="2025-09-10T00:32:00.069187909Z" level=info msg="CreateContainer within sandbox \"d140eb9b9f7e9ee08c895a1ec6021f0ed79186eb0255b03be0cc8418abf8b971\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e42bf851a713e2a3c9579652b5fd0e47e427d89b5054c487724512a4b3d7cb6b\""
Sep 10 00:32:00.069746 containerd[1461]: time="2025-09-10T00:32:00.069717103Z" level=info msg="StartContainer for \"e42bf851a713e2a3c9579652b5fd0e47e427d89b5054c487724512a4b3d7cb6b\""
Sep 10 00:32:00.087345 systemd[1]: Started cri-containerd-10f7a61946958e41267732db83f93070e7ddad9bae242428f7d7545d228e106f.scope - libcontainer container 10f7a61946958e41267732db83f93070e7ddad9bae242428f7d7545d228e106f.
Sep 10 00:32:00.113566 systemd[1]: Started cri-containerd-e42bf851a713e2a3c9579652b5fd0e47e427d89b5054c487724512a4b3d7cb6b.scope - libcontainer container e42bf851a713e2a3c9579652b5fd0e47e427d89b5054c487724512a4b3d7cb6b.
Sep 10 00:32:00.137354 containerd[1461]: time="2025-09-10T00:32:00.137309059Z" level=info msg="StartContainer for \"10f7a61946958e41267732db83f93070e7ddad9bae242428f7d7545d228e106f\" returns successfully"
Sep 10 00:32:00.144826 containerd[1461]: time="2025-09-10T00:32:00.144780584Z" level=info msg="StartContainer for \"e42bf851a713e2a3c9579652b5fd0e47e427d89b5054c487724512a4b3d7cb6b\" returns successfully"
Sep 10 00:32:00.949158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157751495.mount: Deactivated successfully.
Sep 10 00:32:00.999935 kubelet[2498]: E0910 00:32:00.999898 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:01.002592 kubelet[2498]: E0910 00:32:01.002559 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:01.012349 kubelet[2498]: I0910 00:32:01.012244 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gjvzd" podStartSLOduration=30.012209929 podStartE2EDuration="30.012209929s" podCreationTimestamp="2025-09-10 00:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:32:01.01046726 +0000 UTC m=+36.266513165" watchObservedRunningTime="2025-09-10 00:32:01.012209929 +0000 UTC m=+36.268255824"
Sep 10 00:32:01.020846 kubelet[2498]: I0910 00:32:01.020779 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wj42b" podStartSLOduration=30.020756053 podStartE2EDuration="30.020756053s" podCreationTimestamp="2025-09-10 00:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:32:01.020500865 +0000 UTC m=+36.276546760" watchObservedRunningTime="2025-09-10 00:32:01.020756053 +0000 UTC m=+36.276801948"
Sep 10 00:32:02.004417 kubelet[2498]: E0910 00:32:02.004381 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:02.004882 kubelet[2498]: E0910 00:32:02.004536 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:03.006082 kubelet[2498]: E0910 00:32:03.006030 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:04.877579 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:45030.service - OpenSSH per-connection server daemon (10.0.0.1:45030).
Sep 10 00:32:04.917310 sshd[3949]: Accepted publickey for core from 10.0.0.1 port 45030 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:04.919408 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:04.924749 systemd-logind[1444]: New session 11 of user core.
Sep 10 00:32:04.932355 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 00:32:05.061787 sshd[3949]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:05.066488 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:45030.service: Deactivated successfully.
Sep 10 00:32:05.068825 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 00:32:05.069561 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit.
Sep 10 00:32:05.070646 systemd-logind[1444]: Removed session 11.
Sep 10 00:32:10.073277 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:48730.service - OpenSSH per-connection server daemon (10.0.0.1:48730).
Sep 10 00:32:10.106056 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 48730 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:10.107994 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:10.112505 systemd-logind[1444]: New session 12 of user core.
Sep 10 00:32:10.122298 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 00:32:10.233701 sshd[3967]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:10.251019 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:48730.service: Deactivated successfully.
Sep 10 00:32:10.253368 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 00:32:10.254918 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit.
Sep 10 00:32:10.263526 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:48742.service - OpenSSH per-connection server daemon (10.0.0.1:48742).
Sep 10 00:32:10.264632 systemd-logind[1444]: Removed session 12.
Sep 10 00:32:10.295561 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 48742 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:10.297381 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:10.301824 systemd-logind[1444]: New session 13 of user core.
Sep 10 00:32:10.311425 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 00:32:10.476385 sshd[3983]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:10.489684 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:48742.service: Deactivated successfully.
Sep 10 00:32:10.494897 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 00:32:10.497666 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit.
Sep 10 00:32:10.507549 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:48758.service - OpenSSH per-connection server daemon (10.0.0.1:48758).
Sep 10 00:32:10.508778 systemd-logind[1444]: Removed session 13.
Sep 10 00:32:10.541233 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 48758 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:10.543394 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:10.548217 systemd-logind[1444]: New session 14 of user core.
Sep 10 00:32:10.559515 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 00:32:10.679239 sshd[3995]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:10.684062 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:48758.service: Deactivated successfully.
Sep 10 00:32:10.686697 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 00:32:10.687435 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit.
Sep 10 00:32:10.688521 systemd-logind[1444]: Removed session 14.
Sep 10 00:32:15.692763 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:48764.service - OpenSSH per-connection server daemon (10.0.0.1:48764).
Sep 10 00:32:15.729100 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 48764 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:15.731396 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:15.736756 systemd-logind[1444]: New session 15 of user core.
Sep 10 00:32:15.750501 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 00:32:15.870332 sshd[4009]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:15.874836 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:48764.service: Deactivated successfully.
Sep 10 00:32:15.877284 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 00:32:15.878033 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit.
Sep 10 00:32:15.879090 systemd-logind[1444]: Removed session 15.
Sep 10 00:32:20.886238 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:50990.service - OpenSSH per-connection server daemon (10.0.0.1:50990).
Sep 10 00:32:20.920843 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 50990 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:20.922921 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:20.927519 systemd-logind[1444]: New session 16 of user core.
Sep 10 00:32:20.932315 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 00:32:21.042005 sshd[4023]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:21.054436 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:50990.service: Deactivated successfully.
Sep 10 00:32:21.056672 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 00:32:21.058595 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit.
Sep 10 00:32:21.066597 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:50996.service - OpenSSH per-connection server daemon (10.0.0.1:50996).
Sep 10 00:32:21.067735 systemd-logind[1444]: Removed session 16.
Sep 10 00:32:21.096103 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 50996 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:21.097904 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:21.102258 systemd-logind[1444]: New session 17 of user core.
Sep 10 00:32:21.113471 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 00:32:21.493312 sshd[4037]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:21.504431 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:50996.service: Deactivated successfully.
Sep 10 00:32:21.506929 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 00:32:21.508699 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit.
Sep 10 00:32:21.517548 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:51000.service - OpenSSH per-connection server daemon (10.0.0.1:51000).
Sep 10 00:32:21.518709 systemd-logind[1444]: Removed session 17.
Sep 10 00:32:21.555522 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 51000 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:21.557230 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:21.562003 systemd-logind[1444]: New session 18 of user core.
Sep 10 00:32:21.570357 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 00:32:22.727909 sshd[4049]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:22.743615 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:51000.service: Deactivated successfully.
Sep 10 00:32:22.747666 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 00:32:22.750720 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit.
Sep 10 00:32:22.757593 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:51008.service - OpenSSH per-connection server daemon (10.0.0.1:51008).
Sep 10 00:32:22.759710 systemd-logind[1444]: Removed session 18.
Sep 10 00:32:22.795191 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 51008 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:22.797439 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:22.803406 systemd-logind[1444]: New session 19 of user core.
Sep 10 00:32:22.815433 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 00:32:23.201866 sshd[4072]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:23.213484 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:51008.service: Deactivated successfully.
Sep 10 00:32:23.215784 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 00:32:23.217726 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit.
Sep 10 00:32:23.219854 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:51018.service - OpenSSH per-connection server daemon (10.0.0.1:51018).
Sep 10 00:32:23.220679 systemd-logind[1444]: Removed session 19.
Sep 10 00:32:23.269229 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 51018 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:23.271436 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:23.276906 systemd-logind[1444]: New session 20 of user core.
Sep 10 00:32:23.289349 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 00:32:23.403502 sshd[4084]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:23.408311 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:51018.service: Deactivated successfully.
Sep 10 00:32:23.411569 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 00:32:23.412469 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit.
Sep 10 00:32:23.413713 systemd-logind[1444]: Removed session 20.
Sep 10 00:32:28.420547 systemd[1]: Started sshd@20-10.0.0.14:22-10.0.0.1:51020.service - OpenSSH per-connection server daemon (10.0.0.1:51020).
Sep 10 00:32:28.454217 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 51020 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:28.455847 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:28.459916 systemd-logind[1444]: New session 21 of user core.
Sep 10 00:32:28.469376 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 10 00:32:28.577959 sshd[4101]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:28.582433 systemd[1]: sshd@20-10.0.0.14:22-10.0.0.1:51020.service: Deactivated successfully.
Sep 10 00:32:28.584585 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 00:32:28.585235 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit.
Sep 10 00:32:28.586131 systemd-logind[1444]: Removed session 21.
Sep 10 00:32:33.590485 systemd[1]: Started sshd@21-10.0.0.14:22-10.0.0.1:59492.service - OpenSSH per-connection server daemon (10.0.0.1:59492).
Sep 10 00:32:33.623919 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 59492 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:33.625640 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:33.629680 systemd-logind[1444]: New session 22 of user core.
Sep 10 00:32:33.636313 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 10 00:32:33.756742 sshd[4121]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:33.761107 systemd[1]: sshd@21-10.0.0.14:22-10.0.0.1:59492.service: Deactivated successfully.
Sep 10 00:32:33.763383 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 00:32:33.763974 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit.
Sep 10 00:32:33.764884 systemd-logind[1444]: Removed session 22.
Sep 10 00:32:38.772520 systemd[1]: Started sshd@22-10.0.0.14:22-10.0.0.1:59500.service - OpenSSH per-connection server daemon (10.0.0.1:59500).
Sep 10 00:32:38.807378 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 59500 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:38.809253 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:38.813742 systemd-logind[1444]: New session 23 of user core.
Sep 10 00:32:38.821301 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 10 00:32:38.901529 kubelet[2498]: E0910 00:32:38.901483 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:38.930226 sshd[4135]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:38.934616 systemd[1]: sshd@22-10.0.0.14:22-10.0.0.1:59500.service: Deactivated successfully.
Sep 10 00:32:38.937623 systemd[1]: session-23.scope: Deactivated successfully.
Sep 10 00:32:38.938444 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit.
Sep 10 00:32:38.939602 systemd-logind[1444]: Removed session 23.
Sep 10 00:32:40.901281 kubelet[2498]: E0910 00:32:40.901226 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:40.901281 kubelet[2498]: E0910 00:32:40.901288 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:43.947046 systemd[1]: Started sshd@23-10.0.0.14:22-10.0.0.1:33062.service - OpenSSH per-connection server daemon (10.0.0.1:33062).
Sep 10 00:32:43.982544 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 33062 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:43.984256 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:43.988475 systemd-logind[1444]: New session 24 of user core.
Sep 10 00:32:44.000330 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 10 00:32:44.103045 sshd[4150]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:44.117395 systemd[1]: sshd@23-10.0.0.14:22-10.0.0.1:33062.service: Deactivated successfully.
Sep 10 00:32:44.119592 systemd[1]: session-24.scope: Deactivated successfully.
Sep 10 00:32:44.121302 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit.
Sep 10 00:32:44.132417 systemd[1]: Started sshd@24-10.0.0.14:22-10.0.0.1:33064.service - OpenSSH per-connection server daemon (10.0.0.1:33064).
Sep 10 00:32:44.133315 systemd-logind[1444]: Removed session 24.
Sep 10 00:32:44.162445 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 33064 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:44.164149 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:44.167948 systemd-logind[1444]: New session 25 of user core.
Sep 10 00:32:44.177342 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 10 00:32:45.856223 containerd[1461]: time="2025-09-10T00:32:45.856137983Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 00:32:45.860919 containerd[1461]: time="2025-09-10T00:32:45.860880024Z" level=info msg="StopContainer for \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\" with timeout 2 (s)"
Sep 10 00:32:45.861219 containerd[1461]: time="2025-09-10T00:32:45.861155764Z" level=info msg="Stop container \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\" with signal terminated"
Sep 10 00:32:45.868486 systemd-networkd[1388]: lxc_health: Link DOWN
Sep 10 00:32:45.868499 systemd-networkd[1388]: lxc_health: Lost carrier
Sep 10 00:32:45.897701 containerd[1461]: time="2025-09-10T00:32:45.897656955Z" level=info msg="StopContainer for \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\" with timeout 30 (s)"
Sep 10 00:32:45.898992 containerd[1461]: time="2025-09-10T00:32:45.898953620Z" level=info msg="Stop container \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\" with signal terminated"
Sep 10 00:32:45.910686 systemd[1]: cri-containerd-69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68.scope: Deactivated successfully.
Sep 10 00:32:45.911200 systemd[1]: cri-containerd-69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68.scope: Consumed 7.343s CPU time.
Sep 10 00:32:45.912654 systemd[1]: cri-containerd-4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3.scope: Deactivated successfully.
Sep 10 00:32:45.936800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3-rootfs.mount: Deactivated successfully.
Sep 10 00:32:45.939563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68-rootfs.mount: Deactivated successfully.
Sep 10 00:32:46.341221 containerd[1461]: time="2025-09-10T00:32:46.341054296Z" level=info msg="shim disconnected" id=69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68 namespace=k8s.io
Sep 10 00:32:46.341221 containerd[1461]: time="2025-09-10T00:32:46.341151449Z" level=warning msg="cleaning up after shim disconnected" id=69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68 namespace=k8s.io
Sep 10 00:32:46.341505 containerd[1461]: time="2025-09-10T00:32:46.341194271Z" level=info msg="shim disconnected" id=4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3 namespace=k8s.io
Sep 10 00:32:46.341505 containerd[1461]: time="2025-09-10T00:32:46.341272288Z" level=warning msg="cleaning up after shim disconnected" id=4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3 namespace=k8s.io
Sep 10 00:32:46.341505 containerd[1461]: time="2025-09-10T00:32:46.341281225Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:32:46.341505 containerd[1461]: time="2025-09-10T00:32:46.341199991Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:32:46.356469 containerd[1461]: time="2025-09-10T00:32:46.356407834Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:32:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 10 00:32:46.532578 containerd[1461]: time="2025-09-10T00:32:46.532507267Z" level=info msg="StopContainer for \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\" returns successfully"
Sep 10 00:32:46.590410 containerd[1461]: time="2025-09-10T00:32:46.590322500Z" level=info msg="StopPodSandbox for \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\""
Sep 10 00:32:46.590410 containerd[1461]: time="2025-09-10T00:32:46.590400757Z" level=info msg="Container to stop \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:32:46.590410 containerd[1461]: time="2025-09-10T00:32:46.590414012Z" level=info msg="Container to stop \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:32:46.590410 containerd[1461]: time="2025-09-10T00:32:46.590423510Z" level=info msg="Container to stop \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:32:46.590410 containerd[1461]: time="2025-09-10T00:32:46.590433359Z" level=info msg="Container to stop \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:32:46.590703 containerd[1461]: time="2025-09-10T00:32:46.590443518Z" level=info msg="Container to stop \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:32:46.592673 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886-shm.mount: Deactivated successfully.
Sep 10 00:32:46.597770 systemd[1]: cri-containerd-5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886.scope: Deactivated successfully.
Sep 10 00:32:46.616514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886-rootfs.mount: Deactivated successfully.
Sep 10 00:32:46.703444 containerd[1461]: time="2025-09-10T00:32:46.703378192Z" level=info msg="StopContainer for \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\" returns successfully"
Sep 10 00:32:46.704070 containerd[1461]: time="2025-09-10T00:32:46.704019371Z" level=info msg="StopPodSandbox for \"cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2\""
Sep 10 00:32:46.704070 containerd[1461]: time="2025-09-10T00:32:46.704077490Z" level=info msg="Container to stop \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:32:46.711440 systemd[1]: cri-containerd-cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2.scope: Deactivated successfully.
Sep 10 00:32:46.821592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2-rootfs.mount: Deactivated successfully.
Sep 10 00:32:46.821711 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2-shm.mount: Deactivated successfully.
Sep 10 00:32:46.861616 containerd[1461]: time="2025-09-10T00:32:46.861433165Z" level=info msg="shim disconnected" id=5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886 namespace=k8s.io
Sep 10 00:32:46.861616 containerd[1461]: time="2025-09-10T00:32:46.861503667Z" level=warning msg="cleaning up after shim disconnected" id=5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886 namespace=k8s.io
Sep 10 00:32:46.861616 containerd[1461]: time="2025-09-10T00:32:46.861513947Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:32:46.897234 containerd[1461]: time="2025-09-10T00:32:46.897148621Z" level=info msg="TearDown network for sandbox \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" successfully"
Sep 10 00:32:46.897234 containerd[1461]: time="2025-09-10T00:32:46.897222711Z" level=info msg="StopPodSandbox for \"5cff573c958c687b26df329029b840589d99507de8c7720945ab6457a6ccd886\" returns successfully"
Sep 10 00:32:46.909015 containerd[1461]: time="2025-09-10T00:32:46.908909765Z" level=info msg="shim disconnected" id=cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2 namespace=k8s.io
Sep 10 00:32:46.909015 containerd[1461]: time="2025-09-10T00:32:46.909009624Z" level=warning msg="cleaning up after shim disconnected" id=cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2 namespace=k8s.io
Sep 10 00:32:46.909015 containerd[1461]: time="2025-09-10T00:32:46.909026796Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:32:46.926738 containerd[1461]: time="2025-09-10T00:32:46.926690088Z" level=info msg="TearDown network for sandbox \"cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2\" successfully"
Sep 10 00:32:46.926738 containerd[1461]: time="2025-09-10T00:32:46.926733330Z" level=info msg="StopPodSandbox for \"cfbc1c9d25515e75115965cc08323f845c1d23e50b9257d5e12b0d3a9a7a67c2\" returns successfully"
Sep 10 00:32:47.002628 kubelet[2498]: I0910 00:32:47.002573 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl4ln\" (UniqueName: \"kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-kube-api-access-gl4ln\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.002628 kubelet[2498]: I0910 00:32:47.002615 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-config-path\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.002628 kubelet[2498]: I0910 00:32:47.002635 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-hostproc\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.002628 kubelet[2498]: I0910 00:32:47.002649 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-lib-modules\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003374 kubelet[2498]: I0910 00:32:47.002666 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8270908e-cdfb-4c56-aeed-f2328f128cf6-clustermesh-secrets\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003374 kubelet[2498]: I0910 00:32:47.002682 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzhbx\" (UniqueName: \"kubernetes.io/projected/83b7f552-3827-4d34-8b87-852d9f4f172c-kube-api-access-wzhbx\") pod \"83b7f552-3827-4d34-8b87-852d9f4f172c\" (UID: \"83b7f552-3827-4d34-8b87-852d9f4f172c\") "
Sep 10 00:32:47.003374 kubelet[2498]: I0910 00:32:47.002697 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-cgroup\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003374 kubelet[2498]: I0910 00:32:47.002711 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83b7f552-3827-4d34-8b87-852d9f4f172c-cilium-config-path\") pod \"83b7f552-3827-4d34-8b87-852d9f4f172c\" (UID: \"83b7f552-3827-4d34-8b87-852d9f4f172c\") "
Sep 10 00:32:47.003374 kubelet[2498]: I0910 00:32:47.002724 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cni-path\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003374 kubelet[2498]: I0910 00:32:47.002741 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-hubble-tls\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003675 kubelet[2498]: I0910 00:32:47.002753 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-run\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003675 kubelet[2498]: I0910 00:32:47.002767 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-bpf-maps\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003675 kubelet[2498]: I0910 00:32:47.002790 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-etc-cni-netd\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003675 kubelet[2498]: I0910 00:32:47.002813 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-xtables-lock\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003675 kubelet[2498]: I0910 00:32:47.002829 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-host-proc-sys-net\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.003675 kubelet[2498]: I0910 00:32:47.002842 2498 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-host-proc-sys-kernel\") pod \"8270908e-cdfb-4c56-aeed-f2328f128cf6\" (UID: \"8270908e-cdfb-4c56-aeed-f2328f128cf6\") "
Sep 10 00:32:47.006300 kubelet[2498]: I0910 00:32:47.002767 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-hostproc" (OuterVolumeSpecName: "hostproc") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.006300 kubelet[2498]: I0910 00:32:47.002939 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.006634 kubelet[2498]: I0910 00:32:47.002964 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cni-path" (OuterVolumeSpecName: "cni-path") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.006634 kubelet[2498]: I0910 00:32:47.005513 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.006634 kubelet[2498]: I0910 00:32:47.006218 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.006634 kubelet[2498]: I0910 00:32:47.006235 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.006634 kubelet[2498]: I0910 00:32:47.006252 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.006819 kubelet[2498]: I0910 00:32:47.006373 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.006819 kubelet[2498]: I0910 00:32:47.006397 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.008275 kubelet[2498]: I0910 00:32:47.008102 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 10 00:32:47.008831 kubelet[2498]: I0910 00:32:47.008796 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:32:47.009822 kubelet[2498]: I0910 00:32:47.009762 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8270908e-cdfb-4c56-aeed-f2328f128cf6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 10 00:32:47.009952 kubelet[2498]: I0910 00:32:47.009925 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b7f552-3827-4d34-8b87-852d9f4f172c-kube-api-access-wzhbx" (OuterVolumeSpecName: "kube-api-access-wzhbx") pod "83b7f552-3827-4d34-8b87-852d9f4f172c" (UID: "83b7f552-3827-4d34-8b87-852d9f4f172c"). InnerVolumeSpecName "kube-api-access-wzhbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 10 00:32:47.012290 kubelet[2498]: I0910 00:32:47.011893 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b7f552-3827-4d34-8b87-852d9f4f172c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "83b7f552-3827-4d34-8b87-852d9f4f172c" (UID: "83b7f552-3827-4d34-8b87-852d9f4f172c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 10 00:32:47.014012 kubelet[2498]: I0910 00:32:47.013239 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 10 00:32:47.013483 systemd[1]: var-lib-kubelet-pods-8270908e\x2dcdfb\x2d4c56\x2daeed\x2df2328f128cf6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 10 00:32:47.013632 systemd[1]: var-lib-kubelet-pods-8270908e\x2dcdfb\x2d4c56\x2daeed\x2df2328f128cf6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 10 00:32:47.013733 systemd[1]: var-lib-kubelet-pods-83b7f552\x2d3827\x2d4d34\x2d8b87\x2d852d9f4f172c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwzhbx.mount: Deactivated successfully.
Sep 10 00:32:47.014543 kubelet[2498]: I0910 00:32:47.014517 2498 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-kube-api-access-gl4ln" (OuterVolumeSpecName: "kube-api-access-gl4ln") pod "8270908e-cdfb-4c56-aeed-f2328f128cf6" (UID: "8270908e-cdfb-4c56-aeed-f2328f128cf6"). InnerVolumeSpecName "kube-api-access-gl4ln".
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:32:47.017423 systemd[1]: var-lib-kubelet-pods-8270908e\x2dcdfb\x2d4c56\x2daeed\x2df2328f128cf6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgl4ln.mount: Deactivated successfully. Sep 10 00:32:47.087656 kubelet[2498]: I0910 00:32:47.087619 2498 scope.go:117] "RemoveContainer" containerID="4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3" Sep 10 00:32:47.088714 containerd[1461]: time="2025-09-10T00:32:47.088676797Z" level=info msg="RemoveContainer for \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\"" Sep 10 00:32:47.094057 systemd[1]: Removed slice kubepods-besteffort-pod83b7f552_3827_4d34_8b87_852d9f4f172c.slice - libcontainer container kubepods-besteffort-pod83b7f552_3827_4d34_8b87_852d9f4f172c.slice. Sep 10 00:32:47.102008 systemd[1]: Removed slice kubepods-burstable-pod8270908e_cdfb_4c56_aeed_f2328f128cf6.slice - libcontainer container kubepods-burstable-pod8270908e_cdfb_4c56_aeed_f2328f128cf6.slice. Sep 10 00:32:47.102911 containerd[1461]: time="2025-09-10T00:32:47.102685764Z" level=info msg="RemoveContainer for \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\" returns successfully" Sep 10 00:32:47.102126 systemd[1]: kubepods-burstable-pod8270908e_cdfb_4c56_aeed_f2328f128cf6.slice: Consumed 7.448s CPU time. 
Sep 10 00:32:47.103026 kubelet[2498]: I0910 00:32:47.102969 2498 scope.go:117] "RemoveContainer" containerID="4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3"
Sep 10 00:32:47.103120 kubelet[2498]: I0910 00:32:47.103084 2498 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103120 kubelet[2498]: I0910 00:32:47.103106 2498 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103120 kubelet[2498]: I0910 00:32:47.103119 2498 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103511 kubelet[2498]: I0910 00:32:47.103129 2498 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103511 kubelet[2498]: I0910 00:32:47.103138 2498 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103511 kubelet[2498]: I0910 00:32:47.103146 2498 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103511 kubelet[2498]: I0910 00:32:47.103154 2498 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103511 kubelet[2498]: I0910 00:32:47.103177 2498 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103511 kubelet[2498]: I0910 00:32:47.103261 2498 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl4ln\" (UniqueName: \"kubernetes.io/projected/8270908e-cdfb-4c56-aeed-f2328f128cf6-kube-api-access-gl4ln\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103511 kubelet[2498]: I0910 00:32:47.103276 2498 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103511 kubelet[2498]: I0910 00:32:47.103287 2498 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103700 kubelet[2498]: I0910 00:32:47.103298 2498 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103700 kubelet[2498]: I0910 00:32:47.103307 2498 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8270908e-cdfb-4c56-aeed-f2328f128cf6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103700 kubelet[2498]: I0910 00:32:47.103316 2498 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzhbx\" (UniqueName: \"kubernetes.io/projected/83b7f552-3827-4d34-8b87-852d9f4f172c-kube-api-access-wzhbx\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103700 kubelet[2498]: I0910 00:32:47.103324 2498 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8270908e-cdfb-4c56-aeed-f2328f128cf6-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.103700 kubelet[2498]: I0910 00:32:47.103336 2498 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83b7f552-3827-4d34-8b87-852d9f4f172c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 10 00:32:47.107003 containerd[1461]: time="2025-09-10T00:32:47.106905118Z" level=error msg="ContainerStatus for \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\": not found"
Sep 10 00:32:47.118628 kubelet[2498]: E0910 00:32:47.118521 2498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\": not found" containerID="4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3"
Sep 10 00:32:47.118676 kubelet[2498]: I0910 00:32:47.118583 2498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3"} err="failed to get container status \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"4de3f755184a0f94cf706c3014338b7701792d2b4b0095ad8550a9e3b43a25c3\": not found"
Sep 10 00:32:47.118710 kubelet[2498]: I0910 00:32:47.118682 2498 scope.go:117] "RemoveContainer" containerID="69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68"
Sep 10 00:32:47.120874 containerd[1461]: time="2025-09-10T00:32:47.120469899Z" level=info msg="RemoveContainer for \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\""
Sep 10 00:32:47.125827 containerd[1461]: time="2025-09-10T00:32:47.125760162Z" level=info msg="RemoveContainer for \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\" returns successfully"
Sep 10 00:32:47.130621 kubelet[2498]: I0910 00:32:47.129873 2498 scope.go:117] "RemoveContainer" containerID="d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89"
Sep 10 00:32:47.132593 containerd[1461]: time="2025-09-10T00:32:47.132485349Z" level=info msg="RemoveContainer for \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\""
Sep 10 00:32:47.136185 containerd[1461]: time="2025-09-10T00:32:47.136130480Z" level=info msg="RemoveContainer for \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\" returns successfully"
Sep 10 00:32:47.136346 kubelet[2498]: I0910 00:32:47.136318 2498 scope.go:117] "RemoveContainer" containerID="27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be"
Sep 10 00:32:47.137567 containerd[1461]: time="2025-09-10T00:32:47.137540037Z" level=info msg="RemoveContainer for \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\""
Sep 10 00:32:47.140942 containerd[1461]: time="2025-09-10T00:32:47.140906343Z" level=info msg="RemoveContainer for \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\" returns successfully"
Sep 10 00:32:47.141200 kubelet[2498]: I0910 00:32:47.141089 2498 scope.go:117] "RemoveContainer" containerID="9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23"
Sep 10 00:32:47.142093 containerd[1461]: time="2025-09-10T00:32:47.142070268Z" level=info msg="RemoveContainer for \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\""
Sep 10 00:32:47.149015 containerd[1461]: time="2025-09-10T00:32:47.148967358Z" level=info msg="RemoveContainer for \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\" returns successfully"
Sep 10 00:32:47.149209 kubelet[2498]: I0910 00:32:47.149171 2498 scope.go:117] "RemoveContainer" containerID="a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688"
Sep 10 00:32:47.150308 containerd[1461]: time="2025-09-10T00:32:47.150280324Z" level=info msg="RemoveContainer for \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\""
Sep 10 00:32:47.153528 containerd[1461]: time="2025-09-10T00:32:47.153491187Z" level=info msg="RemoveContainer for \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\" returns successfully"
Sep 10 00:32:47.153745 kubelet[2498]: I0910 00:32:47.153636 2498 scope.go:117] "RemoveContainer" containerID="69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68"
Sep 10 00:32:47.153899 containerd[1461]: time="2025-09-10T00:32:47.153863618Z" level=error msg="ContainerStatus for \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\": not found"
Sep 10 00:32:47.154044 kubelet[2498]: E0910 00:32:47.154014 2498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\": not found" containerID="69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68"
Sep 10 00:32:47.154089 kubelet[2498]: I0910 00:32:47.154054 2498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68"} err="failed to get container status \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\": rpc error: code = NotFound desc = an error occurred when try to find container \"69edcb05a258700be2a95e80d76596a390d389f917a6549c24d1f51762052a68\": not found"
Sep 10 00:32:47.154089 kubelet[2498]: I0910 00:32:47.154076 2498 scope.go:117] "RemoveContainer" containerID="d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89"
Sep 10 00:32:47.154435 containerd[1461]: time="2025-09-10T00:32:47.154360916Z" level=error msg="ContainerStatus for \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\": not found"
Sep 10 00:32:47.154593 kubelet[2498]: E0910 00:32:47.154564 2498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\": not found" containerID="d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89"
Sep 10 00:32:47.154644 kubelet[2498]: I0910 00:32:47.154589 2498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89"} err="failed to get container status \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8fa8262f164c26021c1acf8ec30de4f371a94c7006815ad46da68170dcc7d89\": not found"
Sep 10 00:32:47.154644 kubelet[2498]: I0910 00:32:47.154608 2498 scope.go:117] "RemoveContainer" containerID="27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be"
Sep 10 00:32:47.154805 containerd[1461]: time="2025-09-10T00:32:47.154767192Z" level=error msg="ContainerStatus for \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\": not found"
Sep 10 00:32:47.154923 kubelet[2498]: E0910 00:32:47.154889 2498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\": not found" containerID="27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be"
Sep 10 00:32:47.154985 kubelet[2498]: I0910 00:32:47.154933 2498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be"} err="failed to get container status \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\": rpc error: code = NotFound desc = an error occurred when try to find container \"27c763179b6601d9f998c36ebd869784d4de4b1e02a70ec053f195774ec9f4be\": not found"
Sep 10 00:32:47.155016 kubelet[2498]: I0910 00:32:47.154984 2498 scope.go:117] "RemoveContainer" containerID="9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23"
Sep 10 00:32:47.155373 containerd[1461]: time="2025-09-10T00:32:47.155155003Z" level=error msg="ContainerStatus for \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\": not found"
Sep 10 00:32:47.155443 kubelet[2498]: E0910 00:32:47.155259 2498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\": not found" containerID="9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23"
Sep 10 00:32:47.155443 kubelet[2498]: I0910 00:32:47.155287 2498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23"} err="failed to get container status \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\": rpc error: code = NotFound desc = an error occurred when try to find container \"9782aa999c9de15037d85be4e52b0ec369e4a647f533a229abaf9f328bf04e23\": not found"
Sep 10 00:32:47.155443 kubelet[2498]: I0910 00:32:47.155311 2498 scope.go:117] "RemoveContainer" containerID="a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688"
Sep 10 00:32:47.155541 containerd[1461]: time="2025-09-10T00:32:47.155423469Z" level=error msg="ContainerStatus for \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\": not found"
Sep 10 00:32:47.155573 kubelet[2498]: E0910 00:32:47.155510 2498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\": not found" containerID="a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688"
Sep 10 00:32:47.155573 kubelet[2498]: I0910 00:32:47.155535 2498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688"} err="failed to get container status \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4e9c213a092da1542c80f470c69a8d38c5cbf1ce339915acfea918ef93bc688\": not found"
Sep 10 00:32:47.571200 sshd[4165]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:47.579338 systemd[1]: sshd@24-10.0.0.14:22-10.0.0.1:33064.service: Deactivated successfully.
Sep 10 00:32:47.581507 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 00:32:47.583068 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit.
Sep 10 00:32:47.588423 systemd[1]: Started sshd@25-10.0.0.14:22-10.0.0.1:33078.service - OpenSSH per-connection server daemon (10.0.0.1:33078).
Sep 10 00:32:47.589473 systemd-logind[1444]: Removed session 25.
Sep 10 00:32:47.625110 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 33078 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:47.627009 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:47.631635 systemd-logind[1444]: New session 26 of user core.
Sep 10 00:32:47.641303 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 10 00:32:48.155152 sshd[4329]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:48.167854 systemd[1]: sshd@25-10.0.0.14:22-10.0.0.1:33078.service: Deactivated successfully.
Sep 10 00:32:48.171890 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 00:32:48.175760 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit.
Sep 10 00:32:48.194381 kubelet[2498]: E0910 00:32:48.192778 2498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8270908e-cdfb-4c56-aeed-f2328f128cf6" containerName="mount-bpf-fs"
Sep 10 00:32:48.194381 kubelet[2498]: E0910 00:32:48.192838 2498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8270908e-cdfb-4c56-aeed-f2328f128cf6" containerName="cilium-agent"
Sep 10 00:32:48.194381 kubelet[2498]: E0910 00:32:48.192850 2498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8270908e-cdfb-4c56-aeed-f2328f128cf6" containerName="mount-cgroup"
Sep 10 00:32:48.194381 kubelet[2498]: E0910 00:32:48.192859 2498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8270908e-cdfb-4c56-aeed-f2328f128cf6" containerName="apply-sysctl-overwrites"
Sep 10 00:32:48.194381 kubelet[2498]: E0910 00:32:48.192869 2498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83b7f552-3827-4d34-8b87-852d9f4f172c" containerName="cilium-operator"
Sep 10 00:32:48.194381 kubelet[2498]: E0910 00:32:48.192877 2498 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8270908e-cdfb-4c56-aeed-f2328f128cf6" containerName="clean-cilium-state"
Sep 10 00:32:48.194381 kubelet[2498]: I0910 00:32:48.192918 2498 memory_manager.go:354] "RemoveStaleState removing state" podUID="8270908e-cdfb-4c56-aeed-f2328f128cf6" containerName="cilium-agent"
Sep 10 00:32:48.194381 kubelet[2498]: I0910 00:32:48.192927 2498 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b7f552-3827-4d34-8b87-852d9f4f172c" containerName="cilium-operator"
Sep 10 00:32:48.194395 systemd[1]: Started sshd@26-10.0.0.14:22-10.0.0.1:33082.service - OpenSSH per-connection server daemon (10.0.0.1:33082).
Sep 10 00:32:48.197709 systemd-logind[1444]: Removed session 26.
Sep 10 00:32:48.212899 systemd[1]: Created slice kubepods-burstable-podc5fbd368_79fd_483e_809a_ca22a01bf92e.slice - libcontainer container kubepods-burstable-podc5fbd368_79fd_483e_809a_ca22a01bf92e.slice.
Sep 10 00:32:48.214140 kubelet[2498]: I0910 00:32:48.214091 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5fbd368-79fd-483e-809a-ca22a01bf92e-clustermesh-secrets\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214211 kubelet[2498]: I0910 00:32:48.214137 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-cilium-run\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214211 kubelet[2498]: I0910 00:32:48.214185 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5fbd368-79fd-483e-809a-ca22a01bf92e-cilium-config-path\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214211 kubelet[2498]: I0910 00:32:48.214207 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-cilium-cgroup\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214300 kubelet[2498]: I0910 00:32:48.214232 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-lib-modules\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214300 kubelet[2498]: I0910 00:32:48.214253 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-etc-cni-netd\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214300 kubelet[2498]: I0910 00:32:48.214276 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5fbd368-79fd-483e-809a-ca22a01bf92e-hubble-tls\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214370 kubelet[2498]: I0910 00:32:48.214300 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-hostproc\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214370 kubelet[2498]: I0910 00:32:48.214318 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-xtables-lock\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214370 kubelet[2498]: I0910 00:32:48.214339 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c5fbd368-79fd-483e-809a-ca22a01bf92e-cilium-ipsec-secrets\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214370 kubelet[2498]: I0910 00:32:48.214359 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-host-proc-sys-net\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214532 kubelet[2498]: I0910 00:32:48.214381 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-host-proc-sys-kernel\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214532 kubelet[2498]: I0910 00:32:48.214402 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkr7d\" (UniqueName: \"kubernetes.io/projected/c5fbd368-79fd-483e-809a-ca22a01bf92e-kube-api-access-vkr7d\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214532 kubelet[2498]: I0910 00:32:48.214423 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-bpf-maps\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.214532 kubelet[2498]: I0910 00:32:48.214445 2498 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5fbd368-79fd-483e-809a-ca22a01bf92e-cni-path\") pod \"cilium-9mlmq\" (UID: \"c5fbd368-79fd-483e-809a-ca22a01bf92e\") " pod="kube-system/cilium-9mlmq"
Sep 10 00:32:48.229290 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 33082 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:48.232962 sshd[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:48.244588 systemd-logind[1444]: New session 27 of user core.
Sep 10 00:32:48.252653 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 10 00:32:48.307375 sshd[4342]: pam_unix(sshd:session): session closed for user core
Sep 10 00:32:48.318558 systemd[1]: sshd@26-10.0.0.14:22-10.0.0.1:33082.service: Deactivated successfully.
Sep 10 00:32:48.331407 systemd[1]: session-27.scope: Deactivated successfully.
Sep 10 00:32:48.333272 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit.
Sep 10 00:32:48.341458 systemd[1]: Started sshd@27-10.0.0.14:22-10.0.0.1:33090.service - OpenSSH per-connection server daemon (10.0.0.1:33090).
Sep 10 00:32:48.342390 systemd-logind[1444]: Removed session 27.
Sep 10 00:32:48.370830 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 33090 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s
Sep 10 00:32:48.372468 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:32:48.376544 systemd-logind[1444]: New session 28 of user core.
Sep 10 00:32:48.386297 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 10 00:32:48.522344 kubelet[2498]: E0910 00:32:48.522303 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:48.523013 containerd[1461]: time="2025-09-10T00:32:48.522946744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9mlmq,Uid:c5fbd368-79fd-483e-809a-ca22a01bf92e,Namespace:kube-system,Attempt:0,}"
Sep 10 00:32:48.550721 containerd[1461]: time="2025-09-10T00:32:48.549951482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:32:48.550721 containerd[1461]: time="2025-09-10T00:32:48.550682069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:32:48.550913 containerd[1461]: time="2025-09-10T00:32:48.550700073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:32:48.550913 containerd[1461]: time="2025-09-10T00:32:48.550815410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:32:48.576306 systemd[1]: Started cri-containerd-9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2.scope - libcontainer container 9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2.
Sep 10 00:32:48.596988 containerd[1461]: time="2025-09-10T00:32:48.596913375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9mlmq,Uid:c5fbd368-79fd-483e-809a-ca22a01bf92e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\""
Sep 10 00:32:48.597591 kubelet[2498]: E0910 00:32:48.597567 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:48.600810 containerd[1461]: time="2025-09-10T00:32:48.600760827Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 00:32:48.615584 containerd[1461]: time="2025-09-10T00:32:48.615535323Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"97a7d12ab2c73c96f4f1c6c2343db98bfc8889630a7037034e91a19da7560171\""
Sep 10 00:32:48.616050 containerd[1461]: time="2025-09-10T00:32:48.616015658Z" level=info msg="StartContainer for \"97a7d12ab2c73c96f4f1c6c2343db98bfc8889630a7037034e91a19da7560171\""
Sep 10 00:32:48.648295 systemd[1]: Started cri-containerd-97a7d12ab2c73c96f4f1c6c2343db98bfc8889630a7037034e91a19da7560171.scope - libcontainer container 97a7d12ab2c73c96f4f1c6c2343db98bfc8889630a7037034e91a19da7560171.
Sep 10 00:32:48.673975 containerd[1461]: time="2025-09-10T00:32:48.673922510Z" level=info msg="StartContainer for \"97a7d12ab2c73c96f4f1c6c2343db98bfc8889630a7037034e91a19da7560171\" returns successfully"
Sep 10 00:32:48.684669 systemd[1]: cri-containerd-97a7d12ab2c73c96f4f1c6c2343db98bfc8889630a7037034e91a19da7560171.scope: Deactivated successfully.
Sep 10 00:32:48.716260 containerd[1461]: time="2025-09-10T00:32:48.716180086Z" level=info msg="shim disconnected" id=97a7d12ab2c73c96f4f1c6c2343db98bfc8889630a7037034e91a19da7560171 namespace=k8s.io
Sep 10 00:32:48.716260 containerd[1461]: time="2025-09-10T00:32:48.716254566Z" level=warning msg="cleaning up after shim disconnected" id=97a7d12ab2c73c96f4f1c6c2343db98bfc8889630a7037034e91a19da7560171 namespace=k8s.io
Sep 10 00:32:48.716260 containerd[1461]: time="2025-09-10T00:32:48.716265748Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:32:48.904782 kubelet[2498]: I0910 00:32:48.904652 2498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8270908e-cdfb-4c56-aeed-f2328f128cf6" path="/var/lib/kubelet/pods/8270908e-cdfb-4c56-aeed-f2328f128cf6/volumes"
Sep 10 00:32:48.905799 kubelet[2498]: I0910 00:32:48.905766 2498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b7f552-3827-4d34-8b87-852d9f4f172c" path="/var/lib/kubelet/pods/83b7f552-3827-4d34-8b87-852d9f4f172c/volumes"
Sep 10 00:32:49.102267 kubelet[2498]: E0910 00:32:49.102237 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:49.104811 containerd[1461]: time="2025-09-10T00:32:49.104736019Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 00:32:49.139337 containerd[1461]: time="2025-09-10T00:32:49.139244120Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0409b09f85ce5cfb6588dd2bd22c753026bf7d053adacc6a2d6893ed48abc051\""
Sep 10 00:32:49.140904 containerd[1461]: time="2025-09-10T00:32:49.140849174Z" level=info msg="StartContainer for \"0409b09f85ce5cfb6588dd2bd22c753026bf7d053adacc6a2d6893ed48abc051\""
Sep 10 00:32:49.170307 systemd[1]: Started cri-containerd-0409b09f85ce5cfb6588dd2bd22c753026bf7d053adacc6a2d6893ed48abc051.scope - libcontainer container 0409b09f85ce5cfb6588dd2bd22c753026bf7d053adacc6a2d6893ed48abc051.
Sep 10 00:32:49.197829 containerd[1461]: time="2025-09-10T00:32:49.197776894Z" level=info msg="StartContainer for \"0409b09f85ce5cfb6588dd2bd22c753026bf7d053adacc6a2d6893ed48abc051\" returns successfully"
Sep 10 00:32:49.208306 systemd[1]: cri-containerd-0409b09f85ce5cfb6588dd2bd22c753026bf7d053adacc6a2d6893ed48abc051.scope: Deactivated successfully.
Sep 10 00:32:49.231778 containerd[1461]: time="2025-09-10T00:32:49.231708498Z" level=info msg="shim disconnected" id=0409b09f85ce5cfb6588dd2bd22c753026bf7d053adacc6a2d6893ed48abc051 namespace=k8s.io
Sep 10 00:32:49.231778 containerd[1461]: time="2025-09-10T00:32:49.231770565Z" level=warning msg="cleaning up after shim disconnected" id=0409b09f85ce5cfb6588dd2bd22c753026bf7d053adacc6a2d6893ed48abc051 namespace=k8s.io
Sep 10 00:32:49.231778 containerd[1461]: time="2025-09-10T00:32:49.231779903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:32:49.961159 kubelet[2498]: E0910 00:32:49.961048 2498 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 10 00:32:50.106452 kubelet[2498]: E0910 00:32:50.106416 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:50.108627 containerd[1461]: time="2025-09-10T00:32:50.108575550Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 00:32:50.375725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239785175.mount: Deactivated successfully.
Sep 10 00:32:50.457444 containerd[1461]: time="2025-09-10T00:32:50.457373323Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d6225a5c91263b2527bcbcb10c2399e19b396d0493c151408b4c2e14c62738d\""
Sep 10 00:32:50.458147 containerd[1461]: time="2025-09-10T00:32:50.458104300Z" level=info msg="StartContainer for \"2d6225a5c91263b2527bcbcb10c2399e19b396d0493c151408b4c2e14c62738d\""
Sep 10 00:32:50.497335 systemd[1]: Started cri-containerd-2d6225a5c91263b2527bcbcb10c2399e19b396d0493c151408b4c2e14c62738d.scope - libcontainer container 2d6225a5c91263b2527bcbcb10c2399e19b396d0493c151408b4c2e14c62738d.
Sep 10 00:32:50.640787 containerd[1461]: time="2025-09-10T00:32:50.640650818Z" level=info msg="StartContainer for \"2d6225a5c91263b2527bcbcb10c2399e19b396d0493c151408b4c2e14c62738d\" returns successfully"
Sep 10 00:32:50.645571 systemd[1]: cri-containerd-2d6225a5c91263b2527bcbcb10c2399e19b396d0493c151408b4c2e14c62738d.scope: Deactivated successfully.
Sep 10 00:32:50.669613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d6225a5c91263b2527bcbcb10c2399e19b396d0493c151408b4c2e14c62738d-rootfs.mount: Deactivated successfully.
Sep 10 00:32:50.677384 containerd[1461]: time="2025-09-10T00:32:50.677304541Z" level=info msg="shim disconnected" id=2d6225a5c91263b2527bcbcb10c2399e19b396d0493c151408b4c2e14c62738d namespace=k8s.io
Sep 10 00:32:50.677384 containerd[1461]: time="2025-09-10T00:32:50.677362861Z" level=warning msg="cleaning up after shim disconnected" id=2d6225a5c91263b2527bcbcb10c2399e19b396d0493c151408b4c2e14c62738d namespace=k8s.io
Sep 10 00:32:50.677384 containerd[1461]: time="2025-09-10T00:32:50.677372940Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:32:51.110257 kubelet[2498]: E0910 00:32:51.110209 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:51.115233 containerd[1461]: time="2025-09-10T00:32:51.113666173Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 00:32:51.255644 containerd[1461]: time="2025-09-10T00:32:51.255576706Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed4a41fe27095af8ffe6de17e1b75f9d6a6fdc4faf9a4fa88a07b05d2fec5ca6\""
Sep 10 00:32:51.256237 containerd[1461]: time="2025-09-10T00:32:51.256185764Z" level=info msg="StartContainer for \"ed4a41fe27095af8ffe6de17e1b75f9d6a6fdc4faf9a4fa88a07b05d2fec5ca6\""
Sep 10 00:32:51.292412 systemd[1]: Started cri-containerd-ed4a41fe27095af8ffe6de17e1b75f9d6a6fdc4faf9a4fa88a07b05d2fec5ca6.scope - libcontainer container ed4a41fe27095af8ffe6de17e1b75f9d6a6fdc4faf9a4fa88a07b05d2fec5ca6.
Sep 10 00:32:51.319305 systemd[1]: cri-containerd-ed4a41fe27095af8ffe6de17e1b75f9d6a6fdc4faf9a4fa88a07b05d2fec5ca6.scope: Deactivated successfully.
Sep 10 00:32:51.321516 containerd[1461]: time="2025-09-10T00:32:51.321473743Z" level=info msg="StartContainer for \"ed4a41fe27095af8ffe6de17e1b75f9d6a6fdc4faf9a4fa88a07b05d2fec5ca6\" returns successfully"
Sep 10 00:32:51.736352 containerd[1461]: time="2025-09-10T00:32:51.736260954Z" level=info msg="shim disconnected" id=ed4a41fe27095af8ffe6de17e1b75f9d6a6fdc4faf9a4fa88a07b05d2fec5ca6 namespace=k8s.io
Sep 10 00:32:51.736352 containerd[1461]: time="2025-09-10T00:32:51.736334883Z" level=warning msg="cleaning up after shim disconnected" id=ed4a41fe27095af8ffe6de17e1b75f9d6a6fdc4faf9a4fa88a07b05d2fec5ca6 namespace=k8s.io
Sep 10 00:32:51.736352 containerd[1461]: time="2025-09-10T00:32:51.736344150Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:32:52.114305 kubelet[2498]: E0910 00:32:52.114138 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:52.115900 containerd[1461]: time="2025-09-10T00:32:52.115845044Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 00:32:52.134025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099596382.mount: Deactivated successfully.
Sep 10 00:32:52.135345 containerd[1461]: time="2025-09-10T00:32:52.135295843Z" level=info msg="CreateContainer within sandbox \"9f4bd92d685b65e8948cb9e3e5dfe1319285aaff277002ce35d39c0700f6bdf2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6eb1ea3ba4aa88b51e18e999016b53785d50581bbd5fab79aa6178d6aa3f51f5\""
Sep 10 00:32:52.135881 containerd[1461]: time="2025-09-10T00:32:52.135839357Z" level=info msg="StartContainer for \"6eb1ea3ba4aa88b51e18e999016b53785d50581bbd5fab79aa6178d6aa3f51f5\""
Sep 10 00:32:52.175303 systemd[1]: Started cri-containerd-6eb1ea3ba4aa88b51e18e999016b53785d50581bbd5fab79aa6178d6aa3f51f5.scope - libcontainer container 6eb1ea3ba4aa88b51e18e999016b53785d50581bbd5fab79aa6178d6aa3f51f5.
Sep 10 00:32:52.329923 containerd[1461]: time="2025-09-10T00:32:52.329838824Z" level=info msg="StartContainer for \"6eb1ea3ba4aa88b51e18e999016b53785d50581bbd5fab79aa6178d6aa3f51f5\" returns successfully"
Sep 10 00:32:52.738207 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 10 00:32:53.120012 kubelet[2498]: E0910 00:32:53.119869 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:53.381355 kubelet[2498]: I0910 00:32:53.381133 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9mlmq" podStartSLOduration=5.381101538 podStartE2EDuration="5.381101538s" podCreationTimestamp="2025-09-10 00:32:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:32:53.380865625 +0000 UTC m=+88.636911520" watchObservedRunningTime="2025-09-10 00:32:53.381101538 +0000 UTC m=+88.637147433"
Sep 10 00:32:54.523429 kubelet[2498]: E0910 00:32:54.523362 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:54.901851 kubelet[2498]: E0910 00:32:54.901443 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:56.244814 systemd-networkd[1388]: lxc_health: Link UP
Sep 10 00:32:56.256057 systemd-networkd[1388]: lxc_health: Gained carrier
Sep 10 00:32:56.527811 kubelet[2498]: E0910 00:32:56.525981 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:57.128467 kubelet[2498]: E0910 00:32:57.128427 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:57.973466 systemd-networkd[1388]: lxc_health: Gained IPv6LL
Sep 10 00:32:58.130707 kubelet[2498]: E0910 00:32:58.130665 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:32:58.896423 systemd[1]: run-containerd-runc-k8s.io-6eb1ea3ba4aa88b51e18e999016b53785d50581bbd5fab79aa6178d6aa3f51f5-runc.qz2THX.mount: Deactivated successfully.
Sep 10 00:33:01.079288 systemd[1]: run-containerd-runc-k8s.io-6eb1ea3ba4aa88b51e18e999016b53785d50581bbd5fab79aa6178d6aa3f51f5-runc.awOLzC.mount: Deactivated successfully.
Sep 10 00:33:03.236548 sshd[4354]: pam_unix(sshd:session): session closed for user core
Sep 10 00:33:03.241728 systemd[1]: sshd@27-10.0.0.14:22-10.0.0.1:33090.service: Deactivated successfully.
Sep 10 00:33:03.244185 systemd[1]: session-28.scope: Deactivated successfully.
Sep 10 00:33:03.244857 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit.
Sep 10 00:33:03.246009 systemd-logind[1444]: Removed session 28.